Generate alternatives in testing

Related article:
Login Page testing mind map

The following images are taken from Edward de Bono's great book on lateral thinking.

They are an invitation to read the book and learn how to think differently when testing.

So, the problem offered is the following: given a square, find as many ways as possible to divide it into 4 equal parts.

This is the square:

Before looking at the answers, try to come up on your own with as many ways as possible to divide the square into 4 equal parts.

The solutions below are most interesting after you have tried the exercise yourself.

Good luck!!!

Scroll down in the page when done :)

The first easy way of dividing the square is this:

Another easy way follows:

Two less obvious solutions (black and blue) are:

Four even less obvious solutions (black, red, green, blue) are:

You think this is it, right?

More solutions are displayed below.

Keep scrolling ...................

And, yes, there is at least one more solution using circle arcs, which I cannot draw due to my lack of graphical skills. You will have to check the book for it, but it is similar to the last solution.

Now, why is this important to testing?

It shows that no matter how you, as a tester, are testing any application functionality, there are many other ways in which the same functionality can be tested.

The book referred to above explains a different, creative way of thinking: lateral thinking.

In my opinion, any tester interested in improving their skills should read it.


PS: Sorry if the square actually looks like a rectangle, I noticed this mistake too late.

80-20 rule applied to testing

Everybody has heard of the 80-20 rule, which says that 80% of the results come from 20% of the causes.

This can be applied to any field as follows:

- 80% of a company's revenue comes from 20% of its clients

- 80% of the donations to a charity come from 20% of the donors

- 80% of the books in a bookstore are purchased by 20% of the clients

For software, this could mean that:

- 80% of the clients use 20% of the functionality

- 80% of the bugs are caused by 20% of the functionality
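Whether a real project's bug numbers actually split 80/20 is easy to check with a quick Pareto analysis. Here is a minimal sketch; the module names and bug counts are invented for illustration, not taken from any real project:

```python
# Hypothetical bug counts per module (illustrative data only).
bug_counts = {
    "checkout": 120, "search": 45, "login": 30,
    "profile": 15, "reports": 8, "help": 2,
}

total = sum(bug_counts.values())
cumulative = 0
hot_modules = []
# Walk modules from buggiest to least buggy until 80% of bugs are covered.
for module, count in sorted(bug_counts.items(), key=lambda kv: -kv[1]):
    hot_modules.append(module)
    cumulative += count
    if cumulative >= 0.8 * total:
        break

print(hot_modules)  # → ['checkout', 'search', 'login']
```

With this made-up data, covering 80% of the bugs takes half of the modules, not 20% of them, which hints at the complication discussed next.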

I used to think this is how things are, as the rule is very attractive in its simplicity and apparent common sense.

The problem is that when you investigate it a little, things become more complicated.

In this article, Joel Spolsky offers the following opinion on the topic:

A lot of software developers are seduced by the old "80/20" rule. It seems to make a lot of sense: 80% of the people use 20% of the features. So you convince yourself that you only need to implement 20% of the features, and you can still sell 80% as many copies.
Unfortunately, it's never the same 20%. Everybody uses a different set of features. In the last 10 years I have probably heard of dozens of companies who, determined not to learn from each other, tried to release "lite" word processors that only implement 20% of the features.
This story is as old as the PC. Most of the time, what happens is that they give their program to a journalist to review, and the journalist reviews it by writing their review using the new word processor, and then the journalist tries to find the "word count" feature which they need because most journalists have precise word count requirements, and it's not there, because it's in the "80% that nobody uses," and the journalist ends up writing a story that attempts to claim simultaneously that lite programs are good, bloat is bad, and I can't use this damn thing 'cause it won't count my words. If I had a dollar for every time this has happened I would be very happy.
When you start marketing your "lite" product, and you tell people, "hey, it's lite, only 1MB," they tend to be very happy, then they ask you if it has their crucial feature, and it doesn't, so they don't buy your product.
Bottom line: if your strategy is "80/20", you're going to have trouble selling software. That's just reality. This strategy is as old as the software industry itself and it just doesn't pay; what's surprising is how many executives at fast companies think that it's going to work.
How does this apply to testing?

Well, the project release date is fixed, so you cannot test everything thoroughly.

So, you test only 20% of the application, as this is what the majority of users will use.

You select the 20% of the application's functionalities that carry the highest risk and test them well.

You test the remaining 80% of the functionalities by covering just the happy paths.

You think you did a good job, and the project manager is happy with the results.

And after the release, the support team receives lots of issues from clients about the 80% of the application that was not tested well.

What's more, the company's senior management starts noticing problems all over the application too.

The "solution" is, of course, an endless stream of patches fixing the issues discovered by the customers, frustrating those customers and wasting the time of both the development and testing teams.

How familiar is this scenario?
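The risk-selection step in the scenario above can be sketched very simply; the feature names and 1-5 likelihood/impact scores below are invented for illustration:

```python
# Minimal risk-based prioritization sketch (hypothetical features and scores).
# Each tuple is (feature, likelihood of failure 1-5, impact of failure 1-5).
features = [
    ("payment processing", 5, 5),
    ("user registration", 3, 4),
    ("report export", 2, 2),
    ("profile avatar upload", 2, 1),
]

# Risk score = likelihood x impact; rank features from riskiest down.
ranked = sorted(features, key=lambda f: f[1] * f[2], reverse=True)

# Test the top slice thoroughly; the rest get happy-path checks only.
top_n = max(1, len(ranked) // 5)  # roughly the top 20%
deep_test = [name for name, *_ in ranked[:top_n]]
happy_path_only = [name for name, *_ in ranked[top_n:]]
```

The point of the scenario is that the ranking itself is not the mistake; the mistake is assuming the happy-path-only 80% is safe to skim.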

Classic Testing Mistakes

This is too good not to be published here for easy future reference: Classic testing mistakes re-visited.

For anyone who is interested in reading the original article (by Brian Marick), this is the link.

This is the original list of classic testing mistakes:

The role of testing

 ·  Thinking the testing team is responsible for assuring quality.
 ·  Thinking that the purpose of testing is to find bugs.
 ·  Not finding the important bugs.
 ·  Not reporting usability problems.
 ·  No focus on an estimate of quality (and on the quality of that estimate).
 ·  Reporting bug data without putting it into context.
 ·  Starting testing too late (bug detection, not bug reduction).

Planning the complete testing effort

 ·  A testing effort biased toward functional testing.
 ·  Underemphasizing configuration testing.
 ·  Putting stress and load testing off to the last minute.
 ·  Not testing the documentation.
 ·  Not testing installation procedures.
 ·  An overreliance on beta testing.
 ·  Finishing one testing task before moving on to the next.
 ·  Failing to correctly identify risky areas.
 ·  Sticking stubbornly to the test plan.

Personnel issues

 ·  Using testing as a transitional job for new programmers.
 ·  Recruiting testers from the ranks of failed programmers.
 ·  Testers are not domain experts.
 ·  Not seeking candidates from the customer service staff or technical writing staff.
 ·  Insisting that testers be able to program.
 ·  A testing team that lacks diversity.
 ·  A physical separation between developers and testers.
 ·  Believing that programmers can’t test their own code.
 ·  Programmers are neither trained nor motivated to test.

The tester at work

 ·  Paying more attention to running tests than to designing them.
 ·  Unreviewed test designs.
 ·  Being too specific about test inputs and procedures.
 ·  Not noticing and exploring “irrelevant” oddities.
 ·  Checking that the product does what it’s supposed to do, but not that it doesn’t do what it isn’t supposed to do.
 ·  Test suites that are understandable only by their owners.
 ·  Testing only through the user-visible interface.
 ·  Poor bug reporting.
 ·  Adding only regression tests when bugs are found.
 ·  Failing to take notes for the next testing effort.

Test automation

 ·  Attempting to automate all tests.
 ·  Expecting to rerun manual tests.
 ·  Using GUI capture/replay tools to reduce test creation cost.
 ·  Expecting regression tests to find a high proportion of new bugs.

Code coverage

 ·  Embracing code coverage with the devotion that only simple numbers can inspire.
 ·  Removing tests from a regression test suite just because they don’t add coverage.
 ·  Using coverage as a performance goal for testers.
 ·  Abandoning coverage entirely.