Web Testing Exploratory Tours

This is the list of exploratory testing tours that James Whittaker (former Google QA director, currently back at Microsoft) describes in his book on exploratory testing:

The Guidebook tour

Follow the user manual's advice like a wary traveler, never deviating from its lead.

The Money tour

For exploratory testers, finding the money features leads directly to the sales force. Salespeople spend a great deal of time giving demos of applications and are a fantastic source of information for the Money tour. To execute the tour, simply run through the demos yourself and look for problems.

The Landmark tour

Choose a set of landmarks, decide on an ordering for them, and then explore the application going from landmark to landmark until you've visited all of them in your list. Keep track of which landmarks you've used and create a landmark coverage map to track your progress.

The Intellectual tour

This tour takes the approach of asking the software hard questions. How do we make the software work as hard as possible? Which features will stretch it to its limits? What inputs and data will cause it to perform the most processing? Which inputs might fool its error-checking routines? Which inputs and internal data will stress its capability to produce any specific output?

The FedEx tour

During this tour, a tester concentrates on data as it moves through the software. Try to identify inputs that are stored and "follow" them around the software.

The Garbage Collector's tour

This is like a methodical spot check. We can decide to spot check the interface, going screen by screen and dialog by dialog (favoring, like the garbage collector, the shortest route), not stopping to test in detail but checking the obvious things.

The Bad Neighbourhood tour

As bugs are found and reported, we can connect certain features with bug counts and track where bugs are occurring in our product. Because bugs tend to congregate, revisiting buggy sections of the product is a tour worth taking. Indeed, once a buggy section of code is identified, it is recommended to take a Garbage Collector's tour through nearby features to verify that the fixes didn't introduce any new bugs.

The Museum tour

Software's antiquities are legacy code. Older code files that undergo revision or that are put into a new environment tend to be failure prone. With the original developers long gone and documentation often poor, legacy code is hard to modify, hard to review, and evades the unit testing net of developers (who usually write such tests only for new code). During this tour, testers should identify older code and executable artifacts and ensure they receive a fair share of testing attention.

The Back Alley tour

Test the features least likely to be used and the ones that are least attractive to users. If your organization tracks feature usage, this tour will direct you to test the ones at the bottom of the list. If your organization tracks code coverage, this tour implores you to find ways to test the code yet to be covered.

The All-Nighter tour

Exploratory testers on the All-Nighter tour will keep their application running without closing it. They will open files and not close them. Often, they don't even bother saving them, so as to avoid any potential resetting effect that might occur at save time. They connect to remote resources and never disconnect. And while all these resources are in constant use, they may even run tests using other tours to keep the software working and moving data around. If they do this long enough, they may find bugs that other testers will not find, because the software is denied the clean reset that occurs when it is restarted.

The Supermodel tour

During the Supermodel tour, the focus is not on functionality or real interaction; it's only on the interface. Take the tour and watch the interface elements. Do they look good? Do they render properly, and is the performance good? As you make changes, does the GUI refresh properly? Does it do so correctly, or are there unsightly artifacts left on the screen? If the software uses color to convey meaning, is this done consistently? Are the GUI panels internally consistent, with buttons and controls where you would expect them to be? Does the interface violate any conventions or standards?

The Couch Potato tour

A Couch Potato tour means doing as little actual work as possible: accepting all default values, leaving input fields blank, filling in as little form data as possible, never clicking on an advertisement, paging through screens without clicking any buttons or entering any data, and so forth. If there is any choice to go one way in the application or another, the couch potato always takes the path of least resistance.

The Obsessive-Compulsive tour

OCD testers will enter the same input over and over. They will perform the same action over and over. They will repeat, redo, copy, paste, borrow, and then do all that some more. Mostly, the name of the game is repetition. Order an item on a shopping site and then order it again to check whether a multiple-purchase discount applies. Enter some data on a screen, then return immediately and enter it again. These are actions developers often don't program error cases for, and they can wreak significant havoc.

More details on these tours can be found in this article from James Whittaker's blog and in his exploratory testing book.

Mobile Testing Tours

There are a few different ways of doing exploratory testing:

  1. test the application using end-to-end scripted user scenarios and do exploratory testing along the way; the exploratory testing adds variation to the end-to-end scenarios
  2. do exploratory testing on the application components
  3. use testing tours

These three approaches are not mutually exclusive; they offer different perspectives to the tester.

For the testing tours, the focus is not so much on the structure of the application but on the intention of the tester. 

The tester will choose a mix of application features and combine them in a way that matches their testing intention.

The following tours are applicable for testing mobile applications:

Gesture tour

  • On every screen of the application, and on every object, try each of the possible gestures:
        1. Double Tap
        2. Tap
        3. Press
        4. Press and Drag
        5. Swipe
        6. Flick
        7. Shake
        8. Rotate
        9. Pinch
        10. Spread

Orientation tour

  • Work through each screen in portrait, then turn the device on its side and try to repeat in landscape mode. 
  • Change orientation on each screen. Try in portrait, rotate, try in landscape, then rotate back to portrait, move to the next screen.

Change Your Mind tour

  • Go Back a Screen - use the app back functionality or the back button of the device (if any)
  • Get Me Out - try to go to the beginning of the app in the middle of a user flow
  • Cancel! - Try to cancel the current user flow
  • Close The App and Re-open
  • Force App to Background, Reopen - After being re-opened, the app should be in the same state as before being put in the background

Accessories tour

  • Connect physical accessories while using the app (headphones, a keyboard, a stylus, connecting the device to a PC, etc.)
  • Use wireless accessories while using the app

Motion tour
  • Move the arm holding the device in different ways during testing the app

Light tour
  • Use the app with different light conditions, both inside and outside

Location tour

  • Use the app while moving around
  • Use the app close to tall buildings
  • Use the app in different weather conditions
  • Use the app inside different building types

Connectivity tour
  • Use the app with different types of connections; move from one type of connection to another:
    • from strong wifi (3 or 4 bars) to weaker wifi (1 bar)
    • from wifi to a cellular network
    • from a cellular network to wifi
    • from one wifi source to another
    • from connected to no connection (moving into a dead spot)
    • from a dead spot to a wireless network connection

Weather tour
  • Try the application outside, in different weather conditions.

Comparison tour
  • Compare the app on two different devices with the same operating system
  • Compare the app on devices with different operating systems

Sound tour
  • Use the app with the device muted, at low volume, and at high volume

Combination tour
  • Cause multiple technologies to work together on the device:
    • Move the device while interacting with the application using gestures or inputting text, causing touch screen sensors, gesture interpretation and movement sensors to work together.
    • Repeat the above, such as gestures or text inputs or web requests while moving between network connection types.
    • Use the device while doing something else (watching TV, walking)

Consistency tour
  • Check if all screens and app features are consistent with the recommended design practices for that particular OS

User tour
  • Use the app from the perspective of different types of users

Lounge tour
  • Try the application while lying on a couch or sitting in a comfortable chair.
  • Use the application while lying in bed.

Low battery tour
  • Use the app when the battery is low
  • Use the app while the battery is being charged

Temperature tour
  • Use the app in both high and low temperatures

Multi Screen tour
  • Use the app on devices with different screen sizes and resolutions

Pressed For Resources tour
  • Use the app when the device doesn't have sufficient resources:
    • weak or unstable wifi
    • low battery
    • with lots of apps running in the background
    • low space on the storage card
    • when CPU usage is high

Emotions tour
  • Use the app while in different emotional states, for example impatient or relaxed, as in the two tours below

I am in a rush tour
  • Use the app as fast as possible
  • Complete user flows as quickly as you can

Slow As A Snail tour
  • Use the app at a very slow pace: take breaks of a few seconds between actions; let the device lock, then unlock it and continue; don't use the device for a few minutes and then continue

More details on these tours can be found in Jonathan Kohl's TAP INTO MOBILE TESTING book, which inspired this post.

Performance Testing checklist

A few days ago I read the Performance Testing Checklist from the ministryoftesting.com site and was reminded of an old intention of creating a checklist with the elements of a load test.

My last performance testing project was probably 2 years ago so the following info comes from memory.

I only worked on load and performance testing for web apps and web services so far.

The checklist applies to both with minimal changes.

If you prefer a mindmap to a long checklist, this is the link to it.


  • What will be measured?
    • application (web pages) response time
    • throughput
    • server resource utilization
  • Pass/Fail criteria
    • PASS: system performance during performance test <= baseline system performance
    • FAIL
      • system crashes during the load test
      • system becomes unresponsive
      • system performance during the load test > baseline system performance
  • Benchmark the system's performance under highest production load
    • web pages' response times --> from Google Analytics
    • server resource utilization --> from the Server Monitoring tools
    • highest production application load
      • busiest times of the app --> from Google Analytics
      • highest number of concurrent users --> from the web server logs
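The load estimate from the web server logs can be approximated by counting requests per minute and taking the busiest minute. A minimal sketch, assuming a simplified made-up log format (a real access log would need real parsing):

```python
from collections import Counter

def peak_requests_per_minute(log_lines):
    """Count requests per minute from simplified log lines of the form
    'YYYY-MM-DD HH:MM:SS <url>' and return the busiest minute's count."""
    per_minute = Counter(line[:16] for line in log_lines)  # truncate to the minute
    return max(per_minute.values()) if per_minute else 0

# Hypothetical log lines for illustration.
logs = [
    "2015-03-01 14:05:02 /home",
    "2015-03-01 14:05:40 /search",
    "2015-03-01 14:05:59 /product/42",
    "2015-03-01 14:06:10 /home",
]
print(peak_requests_per_minute(logs))  # -> 3
```

Requests per minute is only a proxy for concurrent users, but it is usually enough to pick the production peak to benchmark against.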

  • Define user scenarios

    • get the most used pages from Google Analytics
    • create a few user scenarios using the most used pages
    • identify parameters of each page
    • create test data for the parameters of each page
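Pairing each page with test data for its parameters can be done with a simple data-driven template; a sketch in Python, where the pages, parameter names, and values are made up for illustration:

```python
import itertools

# Hypothetical most-used pages: a name plus a URL template with parameters.
scenario = [
    ("search",  "/search?q={term}"),
    ("product", "/product/{product_id}"),
]

# Test data for each parameter, cycled so every iteration gets a value.
test_data = {
    "term": itertools.cycle(["shoes", "hats", "socks"]),
    "product_id": itertools.cycle(["101", "102"]),
}

def build_request_urls():
    """Fill each page template with the next value of its parameters."""
    urls = []
    for name, template in scenario:
        values = {key: next(gen) for key, gen in test_data.items()
                  if "{" + key + "}" in template}
        urls.append(template.format(**values))
    return urls

print(build_request_urls())  # -> ['/search?q=shoes', '/product/101']
```

Each call produces the next combination of test data, so consecutive virtual users hit the same pages with different parameter values.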

  • Issues to be monitored
    • application becomes unresponsive
    • application generates lots of errors under load
    • the servers' resources are heavily used
    • application's response time is very high
  • Performance testing environment isolation from production
    • confirm that none of the components of the performance testing environment are used in production


  • Prepare test data
    • Extract data from the production environment
    • Modify it for the performance test
    • Generate enough test data so that it is not re-used too quickly
  • Performance test environment
    • Application Environment
      • Application Servers
        • Web Servers
        • Database Servers  
        • Media Servers
        • Search Servers
      • Networking devices
        • Load Balancer
        • Firewall
    • Performance Testing Environment
      • Performance Testing Servers
        • Management Server
        • Agent Servers
  • Application Server Monitoring
    • Web Servers - Performance Counters
      • CPU
      • Memory
      • Disk
      • Network
      • Operating System
      • Web Server
      • Application (.NET)
    • Database Servers - Performance Counters
      • CPU
      • Memory
      • Disk
      • Network
      • Operating System
      • DB counters
  • Application stability before the performance testing
    • Run complete functional testing of the app
    • Confirm that no critical issues are present
  • Create Scripts for the performance testing user scenarios
    • Wait time between script steps
      • random times
      • script recording times
    • Create transactions for script steps
    • Create parameters in the scripts and connect them to the test data
    • Synchronize users in the script
  • Remove any 3rd party dependencies' impact on the application
    • 3rd party dependencies:
      • banners
      • partner sites
      • google analytics
    • point all the 3rd party domains to
  • Identify browsers and bandwidth types to be used for scripts
  • Set up the model of loading users (user ramping)
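The scripting items above, wait times between steps and per-step transactions, can be illustrated with a stripped-down virtual-user loop; `fake_request` is a stub standing in for the HTTP call a real load tool would make:

```python
import random
import time

def fake_request(url):
    """Stub for an HTTP call; a real script would issue the request here."""
    time.sleep(0.01)  # simulate network latency

def run_user_scenario(pages, min_wait=0.0, max_wait=0.05):
    """Execute one virtual user's scenario: each page is a named
    transaction, separated by a random think time."""
    transactions = {}
    for name, url in pages:
        start = time.perf_counter()
        fake_request(url)
        transactions[name] = time.perf_counter() - start  # transaction duration
        time.sleep(random.uniform(min_wait, max_wait))    # think time between steps
    return transactions

timings = run_user_scenario([("home", "/"), ("search", "/search?q=x")])
print(sorted(timings))  # -> ['home', 'search']
```

A real load tool runs many of these loops in parallel and handles parameterization and user synchronization for you; the sketch only shows the shape of one scripted scenario.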


  • Start monitoring of the application environment
  • Start the performance test
    • Let the performance test go for a long time
    • Stop the performance test
  • Collect results
    • Performance testing tool reports
    • Server performance counters
    • Web logs
  • Analyze the results by comparing them with the baseline info
  • Report issues
  • Fix issues
    • Make changes in the application environment
    • Fix code bugs
    • Fine tune the performance testing scripts
  • Repeat the performance test
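The analysis step, comparing collected results against the baseline together with the pass/fail criteria, could look like this in Python (the choice of the 90th percentile as the comparison statistic is an assumption, not something the checklist mandates):

```python
import statistics

def evaluate_run(response_times_ms, baseline_p90_ms, crashed=False, unresponsive=False):
    """Apply the pass/fail criteria: fail on a crash, on unresponsiveness,
    or when response times are worse than the baseline."""
    if crashed or unresponsive:
        return "FAIL"
    # Compare the 90th percentile of measured response times to the baseline.
    p90 = statistics.quantiles(response_times_ms, n=10)[-1]
    return "PASS" if p90 <= baseline_p90_ms else "FAIL"

# Example: a run whose 90th percentile stays at or below a 500 ms baseline passes.
print(evaluate_run([120, 150, 180, 200, 210, 250, 300, 320, 400, 450], 500))  # -> PASS
```

In practice the baseline figure would come from the benchmarking step earlier in the checklist, and the same comparison would be repeated for throughput and server resource counters.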

T-SHAPE testers

There are two ways of finding your way professionally.

This applies not only to testing and software development but to other professions as well.

The first way is to find something that you like, a specific type of job, learn it as well as you can, and do it as well as you can.

A person going this way would be a specialist, someone who goes deep into the core of the job and gains lots of knowledge about it.

Focusing so much on a specific job comes with the disadvantage of not knowing anything else.

If the demand for the job or its requirements on the market change, the person will go through some tough times learning a new job.

The second way is the way of the generalist, someone who has knowledge from different job types but without knowing anything too well.

This type of person navigates the job market more easily and may find jobs faster.

He might, though, have difficulty keeping those jobs: without knowing a job very well, he does not contribute much.

Both the specialist and the generalist types have positive and negative aspects so it is difficult to choose one.

Maybe a better way is to become a specialist with a generalist background, also called a T-SHAPE person.

The horizontal bar of the T stands for the generalist background and the vertical bar for the specialization in one job.

I have suspected for a while now, even before immigrating to Canada, that this is the best way of developing a career.

I heard about the T-SHAPE person for the first time from the Valve employee handbook (http://www.valvesoftware.com/jobs/), also mentioned in Scott Berkun's book about WordPress.com.

So, what is so good about being a T-SHAPE person in your career?

See below:

- you are a great contributor because of your specialization but can also collaborate easily with other groups

- you navigate the complex, ever-changing, and uncertain job market more easily, as multiple job types are possible

- you are more inclined to learn new things

- by continuously learning relevant skills, you stay meaningfully employed

- you have a better ability to interpret, sort, and use the huge volume of available information

- you are more likely to generate new ideas, think laterally, and cross-pollinate across disciplines

How does this apply to a tester?

See below two possible ways of being a T-SHAPE tester:

Which type of tester do you want to be now?