Create a load test plan with JMeter


JMeter concepts used in this example:

- controllers: If, Simple, ForEach

- Regular Expression extractors

- User Defined Variables

- dynamic HTTP requests

- Response assertions

- Duration assertions

- executing JavaScript functions

- BeanShell assertions

- reading and writing user variables in a BeanShell script

- writing to the JMeter log

- using listeners (View Results Tree)



Load Test Plan - requirements



Main URL: http://jmeter.apache.org/

Step 1. Open http://jmeter.apache.org/changes.html

Step 2. Do a few validations:

        a. check that the version number = Version 2.10
        b. check that the page load time < 1000 ms

Step 3. At the bottom of the changes.html page, there are many bug fix links

Step 4. Get the ids of all bug fixes

Step 5. Go through all bug fix ids and repeat the following actions:

        a. Print the bug order number and bug id in the JMeter log

        b. If the bug id < 58000:

           b1. Open the bug page

           b2. Run a few assertions on the bug page:

                b2.1 Check that ASF Bugzilla exists in the page

                b2.2 Check that the bug id exists in the page




Load Test Plan setup



1. Create a Thread Group with the following settings:


action to be taken after a sampler error = continue
number of threads (users) = 10
ramp-up period = 1
loop count = 1
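
With these settings, the 10 user threads all start within 1 second, each thread runs the test plan once, and an error in one sampler does not stop the remaining samplers for that thread.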

2. Create a Simple Controller inside of the thread group; name it "Practice Tests"

3. Create a Simple Controller inside of Practice Tests; name it "Open Bug pages based on the results of a regular expression"

4. In the last Simple Controller, create the following:

- User Defined Variables config element

* it will include the variables used in the requests and assertions


- Once Only Controller

* it will be used for opening the changes.html page only once
* name it "Open The Changes page"



- ForEach controller

* it will be used for opening the bug id pages
* name it "Open Bug Id pages"

5. Add a View Results Tree listener at the same level as the "Practice Tests" Controller; this will be used for getting results from the load test







Create the Test Plan details

1. In the User Defined Variables config element, add the following variables:

BugIds

used for getting all bug fix id values from the changes.html page
the variable will store a list of bug id strings
default value = 0

ResultValue

used for getting the individual bug ids from the BugIds variable
default value = 0

NumberOfBugs

used for getting the bug count in the changes.html page
default value = 0






2. In the "Open The Changes page" controller, create an HTTP request with the following details:


*** this matches requirement 1
Name = Changes Page
Server Name or IP = jmeter.apache.org
Path = /changes.html


3. Add 2 assertions to the "Changes Page" HTTP request


*** this matches requirement 2


a. response assertion

Name = check version number
Response Field To Test = text response
Patterns to test = Version 2.10






b. duration assertion

Name = check page load time
Duration to assert = 1000 ms





4. Add a Regular Expression Extractor post-processor to the "Changes Page" HTTP request


*** this matches requirement 4
Name = GetBugsIds

Response Field To Check = body
Reference Name = BugIds (defined in User Defined Variables)
Regular Expression = (id=([0-9]{5}))+ 
Template = $1$
Match No. = -1 (important when used with a ForEach controller)
Default Value = nothing

This is how the regular expression will work:

- it will find all id=xxxxx values in the page and store them under the BugIds reference name
- because Match No. is -1, JMeter saves the matches as BugIds_1, BugIds_2, ... plus BugIds_matchNr (the number of matches), so BugIds effectively becomes a collection of bug ids
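
As a rough illustration of what the extractor does, here is a plain Java sketch (nothing JMeter needs; the HTML fragment and the bug ids are made up):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BugIdRegexDemo {
    public static void main(String[] args) {
        // made-up fragment of changes.html containing two bug fix links
        String body = "<a href=\"show_bug.cgi?id=55623\">55623</a> <a href=\"show_bug.cgi?id=55510\">55510</a>";
        Matcher m = Pattern.compile("(id=([0-9]{5}))+").matcher(body);
        while (m.find()) {
            // group(1) is what template $1$ keeps, e.g. "id=55623"
            System.out.println(m.group(1));
        }
    }
}

Running it prints id=55623 and id=55510, which is exactly the kind of list the extractor stores under the BugIds reference name.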







5. Add the following to the "Open Bug Id pages" ForEach controller:

*** this matches requirement 5

- Controller Settings:

input variable prefix = BugIds (defined in User Defined Variables and populated by the GetBugsIds regular expression extractor)
output variable name = ResultValue (defined in User Defined Variables)

This is how the controller works:
   for each iteration, it assigns one bug id value from BugIds to the ResultValue variable (see the example below).
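
For example (the bug ids are made up), if the extractor found three matches, the JMeter variables and the resulting iterations would look like this:

BugIds_matchNr = 3
BugIds_1 = id=55623   --> iteration 1: ResultValue = id=55623
BugIds_2 = id=55510   --> iteration 2: ResultValue = id=55510
BugIds_3 = id=55489   --> iteration 3: ResultValue = id=55489

Note: the ForEach controller's "Add '_' before number?" option needs to be checked so that it reads the BugIds_1, BugIds_2, ... variables created by the extractor.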






- Create a BeanShell Sampler for displaying the bug count and id in the JMeter log:

*** this matches requirement 5a

Name = Print Bug Number and ID (inline)
// read the current bug counter and increment it
double myBugNumber = Double.parseDouble(vars.get("NumberOfBugs")) + 1;
vars.put("NumberOfBugs", String.valueOf(myBugNumber));

// ResultValue holds the current ForEach value, e.g. "id=55623";
// the bug id is the 5 digits after "id="
String myResultValue = vars.get("ResultValue");
String myBugId = myResultValue.substring(3, 8);

// write the bug counter and the bug id to jmeter.log
log.info("bug number: " + String.valueOf(myBugNumber) + " - bug id = " + myBugId);




- Create an If Controller so only specific bug id pages are opened:

*** this matches requirement 5b


Name = Open Bug Details page if bug id < 58000
Condition = ${__javaScript('${ResultValue}'.substring(3\,8))} < 58000
  This condition checks whether the bug id (xxxxx) is smaller than 58000;
  If the condition is true, the next HTTP request is executed;
  If the condition is false, the next HTTP request is not executed.
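
For example, with a made-up ResultValue of id=55623, '${ResultValue}'.substring(3,8) returns 55623, so the condition becomes 55623 < 58000, evaluates to true and the Bug Details Page request below runs; for id=58120 it would evaluate to false and the request would be skipped.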



- In the "Open Bug Details page if bug id < 58000" If Controller, create a http request:

                                                                                                       *** this matches requirement 5b1


Name = Bug Details Page
Server name or IP = issues.apache.org/bugzilla
Path = /show_bug.cgi?${ResultValue}
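
With a made-up ResultValue of id=55623, the intention is for the request to resolve to http://issues.apache.org/bugzilla/show_bug.cgi?id=55623.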



 

- In the "Bug Details Page" http request, add a bean shell assertion:

                                                                                                       *** this matches requirement 5b2


Name = assertions
// extract the 5-digit bug id from ResultValue (e.g. "id=55623" -> "55623")
String myResultValue = vars.get("ResultValue");
String myBugId = myResultValue.substring(3, 8);

// ResponseData is the raw response of the "Bug Details Page" sampler
String body = new String(ResponseData);

if (!body.contains("ASF Bugzilla"))
    log.warn("- could not find ASF Bugzilla in the page");
else
    log.info("- found ASF Bugzilla in the page");

if (!body.contains(myBugId))
    log.warn("- could not find the bug id in the page");
else
    log.info("- found the bug id in the page");





Run the load test
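
Start the test from the Run menu and check the result of each request in the View Results Tree listener added earlier; samplers with failed assertions are shown in red.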

Web Testing Exploratory Tours

This is the list of exploratory testing tours that James Whittaker (former Google QA director, currently back at Microsoft) describes in his book on exploratory testing:

The Guidebook tour

Follow the user manual’s advice just like the wary traveler, by never deviating from its lead.

The money tour

For exploratory testers, finding the money features leads directly to the sales force. Sales folk spend a great deal of time giving demos of applications and are a fantastic source of information for the Money tour. To execute the tour, simply run through the demos yourself and look for problems.

The landmark tour

Choose a set of landmarks, decide on an ordering for them, and then explore the application going from landmark to landmark until you’ve visited all of them in your list. Keep track of which landmarks you’ve used and create a landmark coverage map to track your progress.

The intellectual tour

This tour takes on the approach of asking the software hard questions. How do we make the software work as hard as possible? Which features will stretch it to its limits? What inputs and data will cause it to perform the most processing? Which inputs might fool its error-checking routines? Which inputs and internal data will stress its capability to produce any specific output?

The fedex tour

During this tour, a tester must concentrate on this data. Try to identify inputs that are stored and “follow” them  around the software.

The garbage collector's tour

This is like a methodical spot check. We can decide to spot check the interface where we go screen by screen, dialog by dialog (favoring, like the garbage collector, the shortest route), and not stopping to test in detail, but checking the obvious things.

The bad neighbourhood tour

As bugs are found and reported, we can connect certain features with bug counts and can track where bugs are occurring on our product. Because bugs tend to congregate, revisiting buggy sections of the product is a tour worth taking. Indeed, once a buggy section of code is identified, it is recommended to take a Garbage Collector’s tour through nearby features to verify that the fixes didn’t introduce any new bugs.

The museum tour

Software’s antiquities are legacy code. Older code files that undergo revision or that are put into a new environment tend to be failure prone. With the original developers long gone and documentation often poor, legacy code is hard to modify, hard to review, and evades the unit testing net of developers (who usually write such tests only for new code). During this tour, testers should identify older code and executable artifacts and ensure they receive a fair share of testing attention.

The back alley tour

Test the features that are the least likely to be used and the ones that are the least attractive to users. If your organization tracks feature usage, this tour will direct you to test the ones at the bottom of the list. If your organization tracks code coverage, this tour implores you to find ways to test the code yet to be covered.

The all nighter tour

Exploratory testers on the All-Nighter tour will keep their application running without closing it. They will open files and not close them. Often, they don’t even bother saving them so as to avoid any potential resetting effect that might occur at save time. They connect to remote resources and never disconnect. And while all these resources are in constant use, they may even run tests using other tours to keep the software working and moving data around. If they do this long enough, they may find bugs that other testers will not find because the software is denied that clean reset that occurs when it is restarted.

The supermodel tour

During the Supermodel tour, the focus is not on functionality or real interaction. It’s only on the interface. Take the tour and watch the interface elements. Do they look good? Do they render properly, and is the performance good? As you make changes, does the GUI refresh properly? Does it do so correctly or are there unsightly artifacts left on the screen? If the software is using color in a way to convey some meaning, is this done consistently? Are the GUI panels internally consistent with buttons and controls where you would expect them to be? Does the interface violate any conventions or standards?

The couch potato tour

A Couch Potato tour means doing as little actual work as possible. This means accepting all default values, leaving input fields blank, filling in as little form data as possible, never clicking on an advertisement, paging through screens without clicking any buttons or entering any data, and so forth. If there is any choice to go one way in the application or another, the couch potato always takes the path of least resistance.

The obsessive compulsive tour

OCD testers will enter the same input over and over. They will perform the same action over and over. They will repeat, redo, copy, paste, borrow, and then do all that some more. Mostly, the name of the game is repetition. Order an item on a shopping site and then order it again to check if a multiple purchase discount applies. Enter some data on a screen, then return immediately to enter it again. These are actions developers often don’t program error cases for. They can wreak significant havoc.

More details on these tours can be found in this article from James Whittaker's blog and in his exploratory testing book.

Mobile Testing Tours

There are a few different ways of doing exploratory testing:


  1. test the application using end-to-end scripted user scenarios and do exploratory testing along the way; the exploratory testing adds variation to the end-to-end scenarios
  2. do exploratory testing on the application components
  3. use testing tours
These 3 approaches do not exclude each other and offer different perspectives to the tester.

For the testing tours, the focus is not so much on the structure of the application but on the intention of the tester. 

The tester will choose a mix of application features and then combine them in a way that matches his testing intention.

The following tours are applicable for testing mobile applications:

Gesture tour


  • On every screen of the application and every object, try each of the possible gestures
        1. Double Tap
        2. Tap
        3. Press
        4. Press and Drag
        5. Swipe
        6. Flick
        7. Shake
        8. Rotate
        9. Pinch
        10. Spread

Orientation tour


  • Work through each screen in portrait, then turn the device on its side and try to repeat in landscape mode. 
  • Change orientation on each screen. Try in portrait, rotate, try in landscape, then rotate back to portrait, move to the next screen.

Change Your Mind tour


  • Go Back a Screen - use the app back functionality or the back button of the device (if any)
  • Get Me Out - try to go to the beginning of the app in the middle of a user flow
  • Cancel! - Try to cancel the current user flow
  • Close The App and Re-open
  • Force App to Background, Reopen - After being re-opened, the app should be in the same state as before being put in the background

Accessories tour


  • Connect physical accessories while using the app (headphones, connect the device to a PC, use a keyboard, use a stylus, etc.)
  • Use wireless accessories while using the app

Motion tour
  • Move the arm holding the device in different ways during testing the app

Light tour
  • Use the app with different light conditions, both inside and outside

Location tour


  • Use the app while moving around
  • Use the app close to tall buildings
  • Use the app in different weather conditions
  • Use the app inside different building types

Connectivity tour
  • Use the app with different types of connections;  move from one type of connection to another:
    • Strong wifi (3 or 4 bars) to weaker (1 bar)
    • Move from Wifi to a cellular network
    • Move from a cellular network to wifi
    • From one wifi source to another
    • From connected to no connection (moving into a dead spot)
    • Move from a dead spot to a wireless network connection

Weather tour
  • Try the application outside, in different weather conditions.

Comparison tour
  • Compare the app between 2 different devices with the same operating system
  • Compare the app between devices with different operating systems

Sound tour


Combination tour
  • Cause multiple technologies to work together on the device:
    • Move the device while interacting with the application using gestures or inputting text, causing touch screen sensors, gesture interpretation and movement sensors to work together.
    • Repeat the above, such as gestures or text inputs or web requests while moving between network connection types.
    • Use the device while doing something else (watching TV, walking)

Consistency tour
  • Check if all screens and app features are consistent with the recommended design practices for that particular OS

User tour
  • Use the app from the perspective of different types of users

Lounge tour
  • Try the application while lying down on a couch, or a comfortable chair.
  • Use the application while lying in bed.

Low battery tour
  • Use the app when the battery is low
  • Use the app while the battery is being charged

Temperature tour
  • Use the app both in high temperature and low temperature

Multi Screen tour


Pressed For Resources tour
  • Use the app when the device doesn't have sufficient resources:
    • weak wifi
    • bad wifi
    • low battery
    • with lots of apps in the background
    • low space on the storage card
    • when CPU usage is high

Emotions tour


I am in a rush tour
  • Use the app as fast as possible; 
  • Complete user flows as quickly as you can

Slow As A Snail tour
  • Use the app at a very slow pace; take breaks of a few seconds between actions; let the device lock, then unlock it and continue; don't use the device for a few minutes and then continue





More details on these tours can be found in Jonathan Kohl's TAP INTO MOBILE TESTING book, which inspired this post.

Performance Testing checklist

A few days ago I read the Performance Testing Checklist on the ministryoftesting.com site and was reminded of an old intention of creating a checklist with the elements of a load test.

My last performance testing project was probably 2 years ago so the following info comes from memory.

So far, I have only worked on load and performance testing for web apps and web services.

The checklist applies to both with minimal changes.

If you prefer a mindmap to a long checklist, this is the link to it.

ANALYSIS


  • What will be measured?
    • application (web pages) response time
    • throughput
    • server resource utilization
  • Pass/Fail criteria
    • PASS: system performance during performance test <= baseline system performance
    • FAIL
      • system crashes during the load test
      • system becomes unresponsive
      • system performance during load test > baseline system performance
  • Benchmark the system's performance under highest production load
    • web pages' response times --> from Google Analytics
    • server resource utilization --> from the Server Monitoring tools
    • highest production application load
      • busiest times of the app --> from Google Analytics
      • highest number of concurrent users --> from the web server logs

  • Define user scenarios

    • get the most used pages from Google Analytics
    • create a few user scenarios using the most used pages
    • identify parameters of each page
    • create test data for the parameters of each page

  • Issues to be monitored
    • application becomes unresponsive
    • application generates lots of errors under load
    • the servers' resources are heavily used
    • the application's response time is very high
  • Performance testing environment isolation from production
    • confirm that none of the components of the performance testing environment are used in production



PREPARATION


  • Prepare test data
    • Extract data from the production environment
    • Modify it for the performance test
    • Generate enough test data so it is not reused too quickly
  • Performance test environment
    • Application Environment
      • Application Servers
        • Web Servers
        • Database Servers  
        • Media Servers
        • Search Servers
      • Networking devices
        • Load Balancer
        • Firewall
    • Performance Testing Environment
      • Performance Testing Servers
        • Management Server
        • Agent Servers
  • Application Server Monitoring
    • Web Servers - Performance Counters
      • CPU
      • Memory
      • Disk
      • Network
      • Operating System
      • Web Server
      • Application (.NET)
    • Database Servers - Performance Counters
      • CPU
      • Memory
      • Disk
      • Network
      • Operating System
      • DB counters
  • Application stability before the performance testing
    • Run complete functional testing of the app
    • Confirm that no critical issues are present
  • Create Scripts for the performance testing user scenarios
    • Wait time between script steps
      • random times
      • script recording times
    • Create transactions for script steps
    • Create parameters in the scripts and connect them to the test data
    • Synchronize users in the script
  • Remove the impact of any 3rd party dependencies on the application
    • 3rd party dependencies: banners, partner sites, Google Analytics
    • point all the 3rd party domains to 127.0.0.1 (see the example hosts entries after this list)
  • Identify browsers and bandwidth types to be used for scripts
  • Set up the model of loading users (user ramping)
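
As an example of the 127.0.0.1 redirection mentioned above, the hosts file on the load generator machines could contain entries like these (the partner domains are placeholders for whatever 3rd party services your pages call):

# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts)
127.0.0.1   www.google-analytics.com
127.0.0.1   ads.partner-banners.example
127.0.0.1   widgets.partner-site.example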


EXECUTION


  • Start monitoring of the application environment
  • Start the performance test
    • Let the performance test go for a long time
    • Stop the performance test
  • Collect results
    • Performance testing tool reports
    • Server performance counters
    • Web logs
  • Analyze the results by comparing them with the baseline info
  • Report issues
  • Fix issues
    • Make changes in the application environment
    • Fix code bugs
    • Fine tune the performance testing scripts
  • Repeat the performance test






T-SHAPE testers




There are 2 ways of finding your way professionally.

This is valid not only for testing and software development but for other professions as well.

The first way is to find something that you like, a specific type of job, learn it as well as you can, and do it as well as you can.

A person going this way would be a specialist, someone who goes deep into the core of the job and gains lots of knowledge about it.

Focusing so much on a specific job comes with the disadvantage of not knowing anything else.

If the demand for the job or its requirements on the market change, the person will go through some tough times learning a new one.

The second way is the way of the generalist, someone who has knowledge from different job types but does not know any of them very well.

This type of person navigates the job market more easily and maybe finds jobs faster.

He might, though, have difficulty keeping those jobs because, without knowing a job very well, he does not contribute much.

Both the specialist and the generalist types have positive and negative aspects so it is difficult to choose one.

Maybe a better way is to become a specialist with a generalist background, also called a T-SHAPE person.

The horizontal bar of the T stands for the generalist background and the vertical bar for the specialization in one job.

I have suspected for a while now, even before immigrating to Canada, that this is the best way of developing a career.

I heard about the T-SHAPE person for the first time from the Valve employee handbook (http://www.valvesoftware.com/jobs/), also mentioned in Scott Berkun's book about WordPress.com.

So, what is so good about being a T-SHAPE person in your career?

See below:

- you are a great contributor because of your specialization but can also collaborate easily with other groups

- you navigate the complex, ever-changing and uncertain job market more easily, since multiple job types are open to you

- you are more inclined to learn new things

- by continuously learning relevant skills, you stay meaningfully employed

- you have a better ability to interpret, sort and use the huge volume of available information

- you are more inclined to generate new ideas, think laterally and cross-pollinate across disciplines

How does this apply to a tester?

See below 2 possible ways of being a T-SHAPE tester:




Which type of tester do you want to be now?

Become a technical tester while learning JMeter

Yesterday I watched a few videos about JMeter from this site: http://blazemeter.com/blog/jmeter-tutorial-video-series.

I had heard about JMeter before as the free website load testing tool but had never used it.

My experience includes working with LoadRunner for a couple of years on performance and load testing, so I thought there was not much to learn from a free tool.

After watching the videos, I am still convinced that the difference between the paid tool (LoadRunner) and the free one (JMeter) is big, since the sophistication of LoadRunner is high.

But JMeter can work pretty well in many cases too.

I am digressing from what I had in mind .....

After watching the videos, I realized that learning to use JMeter can have a very nice side effect for a tester.

There are many things that have to be mastered before creating and running a successful load test:

- regular expressions: important for finding values in the web page content

- web proxy: a web proxy is used by JMeter to record a web script

- creating scripts: after the web proxy generates the script, it needs to be modified so that the script parameters can be "parameterized" (not great English, I know; what I mean is that the script should be able to use multiple values for the same parameter; imagine that a web page has a category parameter; the script should be able to use sports, business, home and decor as possible values for the category parameter); the "parameterization" is usually done through external data files

- find, save and re-use browser session variables: the session variable is usually generated as soon as the user opens the site, and all the other site pages need it; this uses regular expressions most of the time

- assertions: assertions will verify that the page returned for a web request is actually the correct page; this is done by checking for information in the web page content

- set up load test configuration info: to do this, a basic understanding of HTTP requests, HTTP headers, cookies and caching is important

- listeners: the listeners provide the results for each web script after the script is executed

- results reports: these reports include information like throughput, samples, min, max, average, etc

- server concepts: to run a load test, multiple computers are needed: some of them just send the requests to the target system (the slaves), then there is the master server, which controls the load test, and the target system, which hosts the application under test
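
As an illustration of the master/slave setup (the IP addresses are made up and jmeter-server is assumed to be already running on the slave machines), the master can start a non-GUI distributed test like this:

jmeter -n -t loadtest.jmx -R 192.168.1.11,192.168.1.12 -l results.jtl

Here -n runs JMeter in non-GUI mode, -t points to the test plan, -R lists the remote (slave) servers and -l saves the results to a file.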

So far, I have just summarized some concepts that can easily be found in the JMeter user guide.

Learning JMeter will not be easy or quick, especially for a tester who does not have a technical background and has no previous exposure to load tests and load testing tools.

So, apart from being able to create load tests, what is the second benefit of learning JMeter?

Learning all the key JMeter concepts will improve the tester's technical knowledge a great deal.

Technical knowledge is very handy when deciding to start on test automation or even on security testing.

It also allows the technical tester to do much more than just black-box functional testing.