Load Testing with JMeter - Introduction

Load testing is the process of applying load to an application to see if it can perform as intended under normal conditions.

It is normally done with specialized tools such as LoadRunner or JMeter.

This type of testing is much more complex than manual testing or even test automation, as it requires very diverse knowledge and skills.

The performance tester should have a good understanding not only of the application under test but also of:

- the HTTP protocol, web requests and responses

- server configuration and monitoring

- scripting

- regular expressions

- log parsing

- application analytics
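To give a flavor of the scripting, regular-expression and log-parsing skills listed above, here is a small Python sketch that extracts request fields from a web-server access-log line. The log line and pattern follow the common Apache log style but are illustrative only, not production-grade.

```python
import re

# Illustrative pattern for a common-log-format access line; real logs
# vary, so treat this as a sketch rather than a general parser.
LOG_PATTERN = re.compile(
    r'"(?P<method>[A-Z]+) (?P<path>\S+) HTTP/[\d.]+" '
    r'(?P<status>\d{3}) (?P<size>\d+)'
)

def parse_log_line(line):
    """Return a dict of request fields, or None if the line does not match."""
    match = LOG_PATTERN.search(line)
    return match.groupdict() if match else None

line = '127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326'
print(parse_log_line(line))
# {'method': 'GET', 'path': '/index.html', 'status': '200', 'size': '2326'}
```

A performance tester writes dozens of small extractions like this one, both to build test data and to correlate load-tool results with server logs.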


Some performance testing theory is useful to see the big picture.

The following information is borrowed from a free online resource:
Performance Testing Guidance for Web Applications

Purpose of performance testing

  • Apply normal load to an application to see if it can perform as intended under normal conditions
  • Ensures that a given function, program, or system can simply handle what it’s designed to handle
  • Related to Stress Testing: overload things until they break, applying unrealistic or unlikely load scenarios
  • Measures:

1. response times

2. throughput rates

3. resource-utilization levels

4. your application’s breaking point
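The first three measures can be computed directly from a list of sampled response times. The Python sketch below uses made-up sample data and a deliberately simplistic percentile estimate, just to show what the numbers mean.

```python
# Hypothetical samples: per-request response times (in seconds)
# collected over a 10-second load test window.
response_times = [0.12, 0.15, 0.11, 0.30, 0.14, 0.13, 0.45, 0.12, 0.16, 0.14]
window_seconds = 10.0

avg_response = sum(response_times) / len(response_times)
throughput = len(response_times) / window_seconds  # requests per second

ordered = sorted(response_times)
p90 = ordered[int(0.9 * len(ordered)) - 1]  # crude 90th-percentile estimate

print(f"avg={avg_response:.3f}s  p90={p90:.2f}s  throughput={throughput:.1f} req/s")
# avg=0.182s  p90=0.30s  throughput=1.0 req/s
```

Tools like JMeter report these same figures (averages, percentiles, throughput) for every sampler in the test plan; resource-utilization levels come from monitoring the server side.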

Approach for performance testing

1. Identify the Test Environment

    • Identify the physical test environment and the production environment as well as the tools and resources available to the test team.
    • Physical environment:
      • Hardware
      • Software
      • Network configurations

2. Identify Performance Acceptance Criteria

      • the response time (user concern)
      • throughput (business concern)
      • resource utilization goals and constraints (system concern)
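Acceptance criteria like these can be captured as explicit thresholds and checked automatically after each run. The metric names and numbers in this sketch are illustrative, not taken from any real project.

```python
# Hypothetical acceptance criteria, one per concern listed above.
criteria = {
    "p90_response_time_s": 1.0,   # user concern: response time
    "min_throughput_rps": 50.0,   # business concern: throughput
    "max_cpu_utilization": 0.80,  # system concern: resource utilization
}

# Hypothetical measured values from a test run.
measured = {
    "p90_response_time_s": 0.85,
    "min_throughput_rps": 62.0,
    "max_cpu_utilization": 0.72,
}

passed = (
    measured["p90_response_time_s"] <= criteria["p90_response_time_s"]
    and measured["min_throughput_rps"] >= criteria["min_throughput_rps"]
    and measured["max_cpu_utilization"] <= criteria["max_cpu_utilization"]
)
print("acceptance criteria met:", passed)  # acceptance criteria met: True
```

Writing the criteria down as data, rather than prose, makes the pass/fail decision in step 7 mechanical instead of a matter of opinion.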

3. Plan and Design Tests

    • key scenarios
    • determine variability among representative users
    • how to simulate that variability
    • define test data
    • establish metrics to be collected. 

4. Configure the Test Environment, Tools and Resources

Ensure that the test environment is instrumented for resource monitoring.

5. Implement the Test Design

Develop the performance tests in accordance with the test design.

6. Execute the Test

    • Run and monitor your tests
    • Validate the tests, test data, and results collection
    • Execute validated tests while monitoring the test and the test environment.

7. Analyze Results, Report, and Retest

    • Consolidate and share results data
    • Analyze the data both individually and as a cross-functional team
    • Re-prioritize the remaining tests and re-execute them as needed
    • When all of the metric values are within accepted limits, none of the set thresholds have been violated, and all of the desired information has been collected, you have finished testing that particular scenario on that particular configuration.

Why Do Performance Testing?

  • Assessing release readiness
    • Helps estimate the performance characteristics of an application
    • Helps determine if an application is capable of handling future growth
  • Providing data indicating the likelihood of user dissatisfaction with the performance characteristics of the system.
  • Assessing infrastructure adequacy
    • Evaluating the adequacy of current capacity
    • Determining the acceptability of stability
    • Determining the capacity of the application’s infrastructure, as well as determining the future resources required
  • Assessing adequacy of developed software performance
    • Determine the application’s desired performance characteristics before and after changes to the software.
    • Provide comparisons between the application’s current and desired performance characteristics.
  • Improving the efficiency of performance tuning
    • Analyzing the behavior of the application at various load levels
    • Identifying bottlenecks in the application
    • Providing information related to the speed, scalability, and stability of a product prior to production release


Baselines

    • Process of running a set of tests to capture performance metric data for the purpose of evaluating the effectiveness of subsequent performance-improving changes to the system or application.
    • A critical aspect of a baseline is that all characteristics and configuration options except those specifically being varied for comparison must remain invariant.
    • A baseline can be created for a system, component, or application. 
    • A baseline can set the standard for comparison, to track future optimizations or regressions.  
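Tracking optimizations or regressions against a baseline can be as simple as flagging any metric that degrades beyond a tolerance. The figures and the 10% tolerance below are invented for illustration.

```python
# Hypothetical baseline metrics recorded on a known-good build.
baseline = {"avg_response_s": 0.20, "throughput_rps": 55.0}

# Hypothetical metrics from the current build under the same load.
current = {"avg_response_s": 0.26, "throughput_rps": 54.0}

TOLERANCE = 0.10  # allow up to 10% degradation before flagging

regressions = []
if current["avg_response_s"] > baseline["avg_response_s"] * (1 + TOLERANCE):
    regressions.append("avg_response_s")
if current["throughput_rps"] < baseline["throughput_rps"] * (1 - TOLERANCE):
    regressions.append("throughput_rps")

print("regressions:", regressions)  # regressions: ['avg_response_s']
```

The comparison is only meaningful because everything except the build changed - which is exactly the invariance requirement stated above.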

Load testing tools operate differently from browsers and test automation tools.

They:

- are not browsers

- do not perform all the actions supported by browsers

- do not render HTML pages

- do not execute the JavaScript code included in those pages

- work at the protocol level (the HTTP protocol for web sites): they submit web requests to the web server and then process the server's response.
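A load generator's core loop can be sketched as: fire many requests concurrently and record each response time. In this Python sketch, `send_request` is a hypothetical stand-in that sleeps instead of making a real HTTP call; a real tool would open a socket and talk the protocol directly, with no rendering or script execution.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def send_request(i):
    """Stand-in for a protocol-level HTTP request; returns elapsed time."""
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for the network round-trip
    return time.perf_counter() - start

# Simulate 5 concurrent virtual users issuing 20 requests in total.
with ThreadPoolExecutor(max_workers=5) as pool:
    timings = list(pool.map(send_request, range(20)))

print(f"{len(timings)} requests, slowest {max(timings):.3f}s")
```

This is essentially what a JMeter thread group does: each thread plays a virtual user, and the tool aggregates the recorded timings into the metrics discussed earlier.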

Most load testing tools support multiple protocols, so they can test

- web sites

- database systems

- web services

- client-server apps

Usually, the following components will be found in a load testing environment:

- target machine that hosts the application under test

- master load testing server (load controller): generates the load requests and either sends them directly to the target machine or distributes them to the slave load testing servers

- slave load testing servers (load generators): receive the web requests from the master load testing server and send them to the target machine

A LoadRunner environment, for example, consists of a master (controller) server, slave (load generator) servers, and target servers.

Finally, a bit of HTTP protocol theory is needed to briefly explain the concepts of the HTTP transaction, request, response header, and response body.

  • HTTP uses the client-server model: An HTTP client opens a connection and sends a request message to an HTTP server; the server then returns a response message, usually containing the resource that was requested. After delivering the response, the server closes the connection (making HTTP a stateless protocol, i.e. not maintaining any connection information between transactions).
  • The format of the request and response messages is similar, and English-oriented. Both kinds of messages consist of:

    • an initial line,
    • zero or more header lines,
    • a blank line (i.e. a CRLF by itself), and
    • an optional message body (e.g. a file, or query data, or query output).
  • Put another way, the format of an HTTP message is:
       <initial line, different for request vs. response>
       Header1: value1
       Header2: value2
       Header3: value3

       <optional message body goes here, like file contents or query data;
      it can be many lines long, or even binary data $&*%@!^$@>

The above info is borrowed from http://www.jmarshall.com/easy/http/
Please read more about the HTTP protocol on the hosting site.
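The message structure above can be illustrated with a short Python sketch: build a raw request string, then split a canned response into its initial (status) line, header lines, and body. The host name and response payload are made up for illustration.

```python
# A minimal HTTP/1.1 request: initial line, headers, blank line, no body.
# The host is a placeholder.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

# A canned response a server might return for that request.
raw_response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "Content-Length: 28\r\n"
    "\r\n"
    "<html><body>ok</body></html>"
)

# The blank line (CRLF CRLF) separates the headers from the body.
head, _, body = raw_response.partition("\r\n\r\n")
status_line, *header_lines = head.split("\r\n")
headers = dict(h.split(": ", 1) for h in header_lines)

print(status_line)              # HTTP/1.1 200 OK
print(headers["Content-Type"])  # text/html
```

This is the level at which a load testing tool operates: it inspects the status line, headers, and body as text, and never renders the HTML it receives.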

Having discussed all of the above, we can start looking at JMeter.
