How to test dynamic reports - part 2

There is more to say about testing dynamic reports.

In part 1, the focus was on describing how manual testing is done.

But sometimes more than manual testing is possible, isn't it?

Let's have another look at the same report dashboard.

Mousing over a part of a sub-report will display a popup with various info:

As discussed in part 1, you will need one or more stored procedures (with parameters) to verify all the information from the popup.

But is it possible to automate this testing?

A closer look at the report shows that the report data can be exported in a cross tab format (CSV):

The export file is very simple (the highlighting is mine):

What about creating a Java app that 

- reads the content of the CSV file and stores it in some sort of a list

- runs the stored procedures for all elements of the report dashboard and stores the results in another list

- compares the results of the 2 lists (one from CSV, the second from stored procedures)
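The steps above can be sketched in Java. The CSV layout (a key column and a count column) is an assumption, and in a real run the second map would be populated through JDBC calls to the stored procedures rather than built by hand:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the comparison app. The CSV layout is assumed; the stored-procedure
// results would come from JDBC (CallableStatement) in a real implementation.
public class ReportComparer {

    // parse a "key,count" CSV export into a map
    static Map<String, Integer> parseCsv(String csv) {
        Map<String, Integer> values = new LinkedHashMap<>();
        for (String line : csv.split("\n")) {
            if (line.isBlank()) continue;
            String[] parts = line.split(",");
            values.put(parts[0].trim(), Integer.parseInt(parts[1].trim()));
        }
        return values;
    }

    // report every key whose CSV value differs from the stored-procedure value
    static List<String> compare(Map<String, Integer> fromCsv, Map<String, Integer> fromProcs) {
        List<String> mismatches = new ArrayList<>();
        for (Map.Entry<String, Integer> e : fromCsv.entrySet()) {
            Integer expected = fromProcs.get(e.getKey());
            if (!e.getValue().equals(expected)) {
                mismatches.add(e.getKey() + ": csv=" + e.getValue() + " proc=" + expected);
            }
        }
        return mismatches;
    }
}
```

An empty mismatch list would mean the report data matches the database.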

Is there a better way for regression testing of dynamic reports?

How to test dynamic reports

Reports testing is something I did only rarely in my career.

I validated simple reports here and there but the work was always short and easy.

The reports were simple, displayed data in a tabular format, with some sorting, filtering and grouping.

The validation consisted of checking that

- the report displays the correct data; I would do this by cross-checking the report data with the data from an existing app

- the report's details are correct: filtering, sorting, grouping

A recent reports testing assignment was much different.

The reports were far from simple in this case.

Look, for example, at the Retail Site Openings report from this page:

This is the report content:

This is not a simple report but a dashboard with 4 sub-reports.

Play around with the dashboard a little bit and you will see that clicking parts of every sub-report updates the content in the remaining sub-reports.

The same happens when changing the year or selecting categories.

Even more, multiple selections are possible in the bottom tree map reports.

How do you test this type of report?

Before discussing how to test the report, let's see how the report is built.

It is made with Tableau Desktop and published on a website.

Its data comes from an OLTP database populated through the company's main system.

The OLTP database is first transformed into a simpler database through ETL packages.

The simpler database is then used as the source of an OLAP cube.

The report uses the OLAP cube's data. 

The cube does not include the raw OLTP data but data aggregations (sums, counts, etc.).

How do you test this type of report?

It is difficult to cross check the report data using the main OLTP system (the main OLTP system has raw data while the OLAP cube has data aggregations).

How else can the reports data be validated?

The answer is through SQL queries executed on the OLTP database.

If the SQL queries' results match the reports data, this means that:

- the relational warehouse was created correctly from the OLTP database

- the OLAP cube was created correctly from the relational warehouse

- the dashboards were created correctly from the OLAP cube

Creating the queries is not complicated once the database schema is understood.

But with so many reports in a dashboard, with interactions between the reports, and with filters on top, the number of SQL queries can be high.

Running the SQL queries is not easy since the parameter values have to be changed every time, and the SQL queries are difficult to maintain.

To solve all these problems (not easy to run, difficult to maintain, high number of queries), the SQL queries can be converted into stored procedures with multiple parameters.

The stored procedures resolve all existing issues:

- execution is simpler since only the stored procedure header (name and parameter values) is needed

- maintenance is easy as the code is stored in one place

- one stored procedure replaces several SQL queries

The stored procedures come with performance problems, though, because of how their execution plans are cached.

In some situations, running a SQL script returns results in 10 seconds, but running the same script as a stored procedure takes 30 minutes.

What can be done in this case?

I tried multiple things, like dynamic WHERE clauses and parameter-sniffing workarounds.

Nothing helped.

The only idea that worked was to build the SQL script in each stored procedure based on the values of the parameters and then execute it.

See below a sample stored procedure and the SQL script used for executing it.

ALTER PROCEDURE [dbo].[getResultCountForReport1]
      @Code int
    , @Layer varchar(50)
    , @SubLayer varchar(50)
    , @Filter1 varchar(50)
    , @Filter2 varchar(3)
    , @Filter3 integer
    , @Filter4 integer
    , @ResultValue integer OUTPUT
AS
BEGIN

    DECLARE @LayerCode integer, @SubLayerCode integer

    IF (@Layer <> 'All')
    BEGIN
        SET @LayerCode = dbo.getLayerCode(@Code, @Layer)
        SET @SubLayerCode = dbo.getSubLayerCode(@Code, @Layer, @SubLayer)
    END

    -- build the dynamic SQL script; table and column names are redacted,
    -- and the opening SELECT is reconstructed since the original sample is truncated
    DECLARE @SqlQuery nvarchar(4000)
    SET @SqlQuery =
    N'SELECT COUNT(CASE
         WHEN xxxxxx = ''abcdef'' THEN 1
         WHEN xxxxxx = ''ghijkl'' THEN 2
     END)
     FROM xxxxxxxx AS www
     LEFT JOIN xxxxx AS www ON xxxxxx = xxxxxxx
     LEFT JOIN xxxxx AS www ON xxxxxx = xxxxxxx
     LEFT JOIN xxxxx AS www ON xxxxxx = xxxxxxx
     LEFT JOIN xxxxx AS www ON xxxxxx = xxxxxxx
     LEFT JOIN xxxxx AS www ON xxxxxx = xxxxxxx
     WHERE xxxxxxxxxx = ''AAA'' AND '

    IF @Code <> 0
    BEGIN
        IF @Code = 123
        BEGIN
            SET @SqlQuery = @SqlQuery + ' add WHERE clause '
            IF (@Layer <> 'All')
                SET @SqlQuery = @SqlQuery + ' add WHERE clause AND '
        END

        IF @Code = 456
        BEGIN
            SET @SqlQuery = @SqlQuery + ' add WHERE clause AND '
            IF (@Layer <> 'All')
                SET @SqlQuery = @SqlQuery + ' add WHERE clause AND '
        END
    END

    SET @SqlQuery = @SqlQuery +
    '    mmmmmmmmmmmmm = ''11111''
     AND mmmmmmmmmmmmm IN (1, 2, 3)
     AND mmmmmmmmmmmmm = 1
     AND mmmmmmmmmmmmm IS NOT NULL
     AND xxxxxxxxxx.Date BETWEEN @Filter3 AND @Filter4 '

    IF @Filter1 != 'All'
        SET @SqlQuery = @SqlQuery + ' AND xxxxxxx = @Filter1 '

    IF @Filter2 != 'All'
        SET @SqlQuery = @SqlQuery + ' AND xxxxxxx = @Filter2 '

    DECLARE @ResultTable TABLE (countvalue int)

    INSERT INTO @ResultTable (countvalue)
    EXEC sp_executesql @SqlQuery,
        N'@Code int,
          @Layer varchar(50),
          @SubLayer varchar(50),
          @Filter1 varchar(50),
          @Filter2 varchar(3),
          @Filter3 integer,
          @Filter4 integer',
        @Code, @Layer, @SubLayer, @Filter1, @Filter2, @Filter3, @Filter4

    SET @ResultValue = (SELECT countvalue FROM @ResultTable)

END
GO

-- SQL script for executing the stored procedure:

DECLARE @Code int = 11

DECLARE @Layer varchar(50)    = 'layer1',
        @SubLayer varchar(50) = 'layer2'

DECLARE @Filter1 varchar(50) = 'value1',
        @Filter2 varchar(3)  = 'value2',
        @Filter3 integer     = 33,
        @Filter4 integer     = 44

DECLARE @ResultValue integer

EXEC getResultCountForReport1 @Code, @Layer, @SubLayer,
     @Filter1, @Filter2, @Filter3, @Filter4,
     @ResultValue OUTPUT

SELECT @ResultValue

So, to summarize, effective reports testing needs the following:

- better-than-average SQL scripting knowledge

- a good understanding of the OLTP database schema

- SQL scripts for verifying each element of the report

- converting the SQL scripts into stored procedures

- building the SQL script inside each stored procedure and then executing it

How to build your test automation framework

Foundation Framework

A Foundation Framework is built prior to any applications that are built on top of it.

The idea is that you analyse the needs of the various applications that need the framework, then you build the framework. 

Once the framework is complete you then build applications on top of it. 
The point is that the framework really needs to have a stable API before you start work on the applications, otherwise changes to the framework will be hard to manage due to their knock-on effects with the applications.

While this sounds reasonable in theory, I've always seen this work badly in practice. 

The problem is that it's very hard to understand the real needs of the framework. 

As a result the framework ends up with far more capabilities than are really needed.

Often its capabilities don't really match what the applications really need.

Harvested Framework

To build a framework by harvesting, you start by not trying to build a framework, but by building an application. 

While you build the application you don't try to develop generic code, but you do work hard to build a well-factored and well designed application.

With one application built, you then build another application which has at least some similar needs to the first one. 

While you do this, you pay attention to any duplication between the second and first application. As you find duplication, you factor it out into a common area; this common area is the proto-framework.
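As a small, invented illustration of harvesting: suppose both test apps ended up with their own copy of the same retry loop. The duplicated code gets pulled into a shared helper, and that helper package is the start of the proto-framework:

```java
import java.util.function.Supplier;

// Hypothetical example: both the first and the second application contained
// their own copy of this retry loop. The duplication is factored out into a
// shared helper class; the shared area becomes the proto-framework.
public class Retry {

    // run an action up to maxAttempts times until it returns a non-null result
    public static <T> T withRetries(Supplier<T> action, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            T result = action.get();
            if (result != null) {
                return result;
            }
        }
        return null; // all attempts failed
    }
}
```

Each further application that needs retries now uses the shared helper instead of growing another copy.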

As you develop further applications each one further refines the framework area of the code. 

During the first couple of applications you'd keep everything in a single code base. 

After a few rounds of this the framework should begin to stabilize and you can separate out the code bases.

While this sounds harder and less efficient than Foundation Framework, it seems to work better in practice.

Use simple and easy to understand regular expressions in your code

I used regular expressions a few times in my test automation code and each time they were not easy for other people to understand.

It is possible to make them readable as explained in Martin Fowler's article on REGEX.

For this text (score 400 for 2 nights at Minas Tirith Airport), instead of using

const string pattern = @"^score\s+(\d+)\s+for\s+(\d+)\s+nights?\s+at\s+(.*)";

use

const string numberOfPoints = @"(\d+)";
const string numberOfNights = @"(\d+)";
const string hotelName      = @"(.*)";

private static String composePattern(params String[] args) {
    return "^" + String.Join(@"\s+", args);
}

static readonly string pattern = composePattern("score", numberOfPoints,
    "for", numberOfNights, "nights?", "at", hotelName);

Instead of having a long and complicated expression, have a few simple, short expressions that are joined together into a long regular expression.

Store your test automation code in a source control system (Subversive)

One thing that you should use as soon as your test automation work gets serious is a source control system for your code.

You can use the source control system as a repository so that you check out the code when adding changes, commit the changes back, look at different versions of the code, even revert back to a previous version.

In this post, I will present the Subversive plug-in for Eclipse and show how to install and use it.

First, you need to install it.

Open Eclipse, click on the HELP menu and then on the Install New Software menu option.

In the Install window, click the Available Software Sites link.

Copy the link of an Available Software Site from the Preferences window.

Paste the link in the WORK WITH field of the Install window:

Expand the Collaboration section:

Check all Subversive apps and complete the installation process.
You may need to restart Eclipse a few times during the installation.

Install the Subversive Connectors through the same process (Install window) using a different update site: Polarion -

Next, click on the SVN Repository Exploring perspective.

Then, click on the Window menu option, Show View, SVN Repositories.

The SVN Repositories panel should be displayed.

To create a new repository, right click in the panel, select New and then Repository:

In the Create Repository window, click the Browse button and select an empty folder from your drive. Click OK to complete the creation of the new repository.

You will know that the new repository is created by checking the repository folder in Explorer.

The folder should have a few sub-folders and files used by Subversion to manage the code source control.

Even though the repository is created, it is not yet displayed in Eclipse in the SVN Repositories panel.

To add it, right click again, select New and Repository Location.

Add the folder location of the repository and complete the process.

The repository should be displayed now in the SVN Repositories panel.

Add the Project Structure to the repository by right clicking on the repository, select New, select Project Structure. The project structure is added to the repository (project folder, trunk, tags, branches):

To import an existing test automation project in the repository, right click on the repository, click Import, then select the test automation project:

You should be able to see the content of the test automation project and all its files in the repository.

Right click on the project folder and select Check Out to get the code:

After switching the perspective to Java, the checked out project is displayed in the Navigator panel.

Double click on a class to open it in the editor. You will notice that the status of the class is writable in the document status bar.

All source control functions can be accessed for the selected class file in the TEAM menu:

You can commit your changes to the repository using Commit:

You can look at the history of changes to a document:

And you can compare different versions of the same document side by side:


Interview Tips

1. Prepare for the interview by studying the company and, more importantly, the products of the company.

Imagine that you are already working for the company.

How would you test their product?

What testing strategy would you take?

What testing types would you use?

What would be challenging for testing that product?

How would you creatively test the product?

Prepare a short presentation that highlights your preparation to test the company's product.

Attach a mind map that highlights the testing strategy and planning.

2. If possible, get details on the interview's agenda, the type of the interview and who will attend it.

Learn more about the people who will attend the interview from their LinkedIn profiles.

Prepare for the type of interview if it is unusual.

3.  Attend only interviews for positions that you are really interested in, that really excite you. 

If the position is exciting, your level of enthusiasm and motivation will be high and this is important for the client to see.

Be confident that you can do the work.

4. Be prepared to answer any questions on your resume.

Think ahead of time of possible questions that you can be asked.

5. Make sure that you answered the question.  

It can be helpful to even ask “Did I answer your question?”

6. Be concise

Answer the question but do not deviate from the topic.

Avoid talking too much.

7.  If you don't have an answer for a question, don't try to guess. 

It is better to say "I don't know" instead of guessing.

8. Answer questions using the STAR method (situation, task, action, result)

Provide examples from your work by telling short stories that highlight the situation, task, action and result.

9. Do what you can to get the job but be relaxed about the outcome of the interview.

The hiring decision is out of your control and you may not get the job even on your best day.

Try to have fun at the interview and learn as much as possible from it, especially on things that you do not know yet in your profession.

10.  Ask good questions about the job and company. 

It is important for you to learn as much as possible about the position before being hired:

- tell me more about what you are looking for from the successful candidate

- tell me more about the current and future projects

- why the position is available

- what is challenging for this position

- what are the expectations of your manager

- what is the work style in the company

- what are the day-to-day responsibilities for this role

11. During the interview, do not rush with the answers. 

You can think for a minute or two before answering difficult questions.

12. Be prepared to explain why they should give you the job.

Why are you the best fit? 

Why should you be hired and not someone else?

What do you bring new to the client?

13. Be prepared to explain why you want to work for this specific company and not for other companies.

14. Know your short-term and long-term plans, and make sure they align with those of the company.

15. When the interview is attended by people with different positions, adapt your answers to the position/level of your interlocutor.

When talking with someone who has a director position, there is no need to mention very technical terms as they may not be familiar with the terms.

16. Try to go to the interview early and as relaxed as possible.

Exercise a bit before the interview or take a walk.

17. At the end of the interview, express your interest in the job and ask for next steps

18. Do not discuss salary expectations in the first interview. Wait until you get to the short list of candidates. The client will make the proper offer to the right candidate.

How do you determine if a test case should be automated or only tested manually?


What criteria would you use to determine if a test case should be automated or only tested manually?

Here I would think about:

- how critical the feature is

- how complex the feature is

- time taken to automate

- time spent on manual testing, and frequency

- how repetitive the test is
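These criteria can be sketched as a simple scoring heuristic. The weights and the threshold below are invented for illustration, not an established formula:

```java
// Illustrative heuristic only -- the horizon of ten releases and the
// cost comparison are invented assumptions, not an established rule.
public class AutomationDecision {

    static boolean shouldAutomate(boolean critical, int runsPerRelease,
                                  double hoursToAutomate, double hoursPerManualRun) {
        // automation pays off when the manual effort it replaces over,
        // say, ten releases exceeds the cost of writing the script
        double manualCost = runsPerRelease * hoursPerManualRun * 10;
        return critical && manualCost > hoursToAutomate;
    }
}
```

A test run once per release that takes 40 hours to automate would stay manual; a critical test repeated five times per release would be automated.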


Usually, test automation should be done only for functional tests.

Since test automation means creating code and since it takes time, you can only automate a small number of test cases.

Test automation should be at the top of the test pyramid, where unit tests are at the bottom.

You should not automate negative tests as unit tests can do this better.

You should not automate GUI testing.

You should not automate low priority user scenarios.

So the answer is:

Automate test cases that are functional, have high priority (critical for the application), are on the user's happy path and correspond to user scenarios that are very popular.


Automate test cases that are used for a build's smoke test.


You mention only creating automated tests for the happy path, I've seen this stated a lot but rarely with reasons why.  

Is yours simply that the negative tests are easier to do in unit testing?  

You also mention to restrict it to functional tests; by this I'm assuming you mean it's covering a large path, such as an end-to-end test for a user placing an order, whereas something testing only a small piece of code/isolated function should be covered only by unit tests?


The difference between positive and negative tests is that

- positive tests prove that the system works (if you use the happy path)

- the negative tests do not prove this

- there is 1 positive test for everything

- there are many negative tests

The difference between unit tests and the UI automated tests (Selenium, QTP, etc) is that

- unit tests are extremely fast; you can execute thousands in 1 hour

- UI automated tests are very slow as they go through the application (for a site, the browser needs to be opened, then the site, etc)

- the unit tests use the application framework components and are stable

- the UI automated tests are brittle and break easily, as they use the application where things can change (position of elements, ids of elements, etc.)

So if the purpose of unit tests is to confirm that the components of the app are working correctly, then the unit tests should cover all tests at the component level, including the negative ones.

The purpose of the test automation scripts is to confirm that the components fit well together and that the app works (for the happy path mostly).

Since these UI tests are slow and brittle, and since the unit tests covered all component testing, you don't need to repeat yourself, so just focus on functional, positive tests.

Using Config Elements in a JMETER test plan

The test plan will be very simple and consists of sending requests for the VPL.CA results page using different sort orders of the results.

The typical URL of a results page of the site is


The t parameter shows that the URL corresponds to a keyword search and
the q parameter has the value of the search keyword.

After sending this request to the server, if the page loads correctly, 10 or 25 results are loaded.

If the sort order is modified, the URL of the page becomes

Relevance sort order

Date Acquired



Published date

All sort URLs start with the same base URL and are different in the last part.

So, the JMETER project starts with creating the thread group and the http request:

The first Config Element to be used is HTTP REQUEST DEFAULTS element.

Any http request fields populated in this element can be empty in the actual request as they are "inherited" from the HTTP Request Defaults. In our case, just the Server Name or IP is used:

Next, we will turn the q parameter of the URL into a variable so that each request uses a different keyword value.

This is done by adding another config element, CSV DATA Set Config that reads the keyword values from a text file:

Each http request will get a value from the CSV Data Set Config through the keywordValue variable.

The http request will look a bit different now:

Next, the http request will be changed again so that the sort order is used.

First, I will add 2 new config elements to the plan: random variable and user defined variables.

A random number between 1 and 100 will be generated and stored in the randomNumber variable.

2 user defined variables are created as well:

I will use these config elements as follows:

if (randomNumber between 1 and 20)
sortOrder = "Relevance";

if (randomNumber between 21 and 40)
sortOrder = "acquired";

if (randomNumber between 41 and 60)
sortOrder = "Title";

if (randomNumber between 61 and 80)
sortOrder = "Author";

if (randomNumber between 81 and 100)
sortOrder = "Published";

A BeanShell sampler will determine the sort order. 
The BeanShell sampler will

- get the randomNumber variable value

- convert it from String to Integer

- set local variables with the sort order name and sort order parameters based on the randomNumber value

- set the user defined variables based on the local variables

- write the sort order name and the sort order parameters to the JMETER log
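The sampler's core logic can be sketched in plain Java. The JMeter-specific calls (vars.get("randomNumber"), vars.put, and the log writes) are left out, so only the mapping from the random number to the sort order name remains:

```java
// Plain-Java sketch of the BeanShell sampler's mapping. In JMeter the input
// would come from vars.get("randomNumber") (a String converted to Integer)
// and the chosen value would go back via vars.put.
public class SortOrderChooser {

    static String chooseSortOrder(int randomNumber) {
        if (randomNumber <= 20) return "Relevance";
        if (randomNumber <= 40) return "acquired";
        if (randomNumber <= 60) return "Title";
        if (randomNumber <= 80) return "Author";
        return "Published";
    }
}
```

Since BeanShell accepts Java syntax, the same if-chain can be pasted into the sampler almost unchanged.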

The http request will now have an updated value:


The http request needs a few assertions:

1. check that the number of books in the page is greater than 0

2. check that the selected sort order is the one defined in the bean shell sampler

I have also added an XPATH extractor to get the number of results in the page and another BeanShell post-processor to display this value in the log:

It is time to run the project and look at the results:

The last 3 config elements to be added are:

HTTP Cache Manager

It allows clearing the cache for each iteration or configuring the cache expiration settings.

HTTP Cookie Manager

It allows clearing the cookies after each iteration or setting cookies.

HTTP Header Manager

It allows changing the request headers so that the requests are sent using a specific browser's info (e.g. Chrome).