Tips and tricks for testing credit card validation

1. The following test credit card numbers validate properly but are not actually in use (http://support.microsoft.com/kb/258255); a validation sketch follows the list:

AMERICAN EXPRESS - 3111-1111-1111-1117
VISA - 4111-1111-1111-1111
MASTERCARD - 5111-1111-1111-1118
DISCOVER - 6111-1111-1111-1116
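
Card-number validators typically run a Luhn (mod-10) checksum plus prefix and length checks. Here is a minimal Luhn sketch in Python that you can point at any candidate number (the function name is my own; real validators do more than this):

    def luhn_valid(number: str) -> bool:
        digits = [int(ch) for ch in number if ch.isdigit()]  # ignore dashes/spaces
        total = 0
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:        # double every second digit from the right
                d *= 2
                if d > 9:
                    d -= 9        # same as summing the digits of the product
            total += d
        return total % 10 == 0

    print(luhn_valid("4111-1111-1111-1111"))   # the VISA number above -> True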

2. How do you test going over the limit on a credit card?

Use a prepaid VISA gift card. Pick one for $25, go over the limit and see what happens when the card is declined. This is great because no personal card information is revealed in the wild.

Source: http://www.testthisblog.com

A Tester Is For Life - ebook with testers' thoughts

An excellent ebook with testers' opinions; this is the link.

Some of my favorite paragraphs are below.

It is essential to notice that everybody suggests more reading, more reading and more reading. It is not possible to evolve as a tester without reading lots of books, blogs and magazines, and without lots of testing. But lots of testing alone is not enough...

Alex

What’s the biggest challenge facing Testing currently? 
According to me, the biggest challenge facing the testing industry is: test cases replacing human brains. Management feels that test cases will solve the problem. If a defect is missed, add it to the test suite. The vicious circle continues. And finally, testers are bored.


What’s the biggest challenge facing Testing currently? 
I’ve a feeling that testers have given up the passion to learn something new. There are exceptions, but the majority of testers I met here in my place are not interested in reading blogs and writing one themselves. They are just doing testing because their employer wants them to do it!

What’s the biggest mistake you’ve ever made when testing? 
Working 36 hours straight without sleep trying to finish testing and make the release/production date. And this was after working 8 weeks straight putting in 60-70 hour weeks. Because I was tired I missed a last-minute change and bug in the application, and when it got out in the real world it caused the application to corrupt data. Thus we didn't have a lot of happy customers on our hands, the press ate us alive for it, and management was breathing down my neck asking why this was missed. The lesson I learned is that you cannot kamikaze yourself and at the same time be effective in your job. Working insane hours will only lead to mistakes and defects escaping to the wild. I didn't push back on the deadline and say we needed a few more days to finish. I put a big 'S' on my chest thinking I was Superman when in reality it only stood for 'Stupid man'. It's a mistake I have not repeated for the rest of my career. (Jim Hazen, www.softwaretestingclub.com)


What’s the biggest challenge facing Testing currently? 
Commoditization. There has always been a mindset that Testing is not a ‘skilled’ profession (or a profession at all), and that anyone can do it. This has become worse with outsourcing and offshoring of the work. There are too many people using it as a way to ‘get in’, and companies (both client and provider) that see it as a job that doesn’t need the same skills and calibre of people as development. The fact is you need people who are highly skilled and experienced to do the job effectively and economically.

What’s the biggest mistake you’ve ever made when testing?
Trying to please your boss and accepting his demand to make a release-recommendation statement. If you as a tester and/or test manager are the only person who has to decide Go or No-Go, then you will probably be the only person blamed, either for delaying a release or for a bug that slipped through. The best approach I have experienced was at a company where the complete team, BA, DEV and Testing, was involved in the decision.

How to improve the software testing craft? 
Seek out new knowledge. Read. Learn. Talk to other testers. Try new things. Start a blog.

How to improve the software testing craft? 
Don’t be afraid to switch jobs frequently in your early career. You will gain a greater variety of skills faster, plus get a clearer picture of what you are looking for in your career. Once you figure out what you love to do and are good at, then you can start specializing.

What is the most valuable “tool” you use to aid your testing? 
One valuable tool I’m discovering at the moment is the value of coaching and teaching others as a way of deepening your own understanding. I’m starting to think that coaching and mentoring testers is perhaps an essential tool in a tester’s toolkit.

Scripting languages comparison

Related article:
Thoughts on test automation (Brett Pettichord)

Last week, I went through a lot of printed articles waiting to be read and among them were a few from Brett Pettichord on test automation.

If you don't know who he is, he wrote LESSONS LEARNED IN SOFTWARE TESTING with James Bach and Cem Kaner and is one of the experts in test automation.

The HOMEBREW TEST AUTOMATION article (available on www.pettichord.com) has interesting comparisons between scripting languages, as follows:

[scripting-language comparison table from the article; the scripting-language notes in the Pettichord summary below cover the same ground]

Ruby, Python and Perl are all supported by SELENIUM.

The article is rather old (2004) but the comparison is still useful, especially for testers considering learning their first scripting language.

This convinced me to learn Ruby, so I will give it a try in the coming months.

So far, the only language I have used for test automation is VBScript, with QTP.

But I am open to better language alternatives, especially since QTP is so expensive and Selenium is free :)
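
For a taste of what such a test looks like, here is a minimal Selenium sketch in Python (a sketch only, assuming the selenium bindings and a local Firefox driver; the URL and the check are just examples):

    from selenium import webdriver

    driver = webdriver.Firefox()            # assumes a local Firefox/geckodriver
    try:
        driver.get("http://example.com")    # example URL
        assert "Example" in driver.title    # a trivial check on the page title
    finally:
        driver.quit()                       # always release the browser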

Alex







Brett Pettichord - thoughts on test automation

The following were selected from various articles by Brett Pettichord from 2002 and 2004.
                                                               
- Focus the automation strategy on key features or concerns that can benefit from the added power of automated testing

- Find a way to leverage the staff skills available for testing

- Test automation is a software development process:

    1. establish milestones and deliverables
    2. define requirements
    3. manage source code, test data, tools
    4. design before coding
    5. review the code
    6. test the code
    7. track automation bugs
    8. document for other users

- Test automation is a significant investment:

    1. staff time commitment can dwarf tool cost
    2. prepare for roughly 10 times the effort of manual testing
    3. maintenance effort can be large

- Automated regression tests find a minority of the bugs and most of them are found when the tests are created

- Can you automate at times when you can't otherwise test?

- Capture-replay fails; tests are too sensitive to user interface changes

- Test tools are buggy

- The problem with bad automation is that no one may notice:

    1. tests may not do what you thought
    2. tests may no longer be interesting
    3. coverage may be poor
    4. false alarms may be common
    5. test results may be false

- Test automation projects require skills in
    1. testing
    2. development
    3. project management

- Don't mandate 100 percent automation

- Use pilot projects to prove feasibility

- Use abstraction techniques to insulate tests from user interface changes (one of these is sketched after the list):

    1. window maps
    2. data driven tests
    3. task libraries
    4. keyword driven tests
    5. api driven tests
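
To make one of these concrete, here is a toy keyword-driven test in Python (every name is mine, invented for illustration): the test itself is a plain table of keywords, so a user interface change only touches the driver layer, never the tests.

    ACTIONS = {
        "open":  lambda app, target:      app.open(target),
        "type":  lambda app, field, text: app.type(field, text),
        "click": lambda app, button:      app.click(button),
    }

    def run_test(app, steps):
        for keyword, *args in steps:       # each step: a keyword plus arguments
            ACTIONS[keyword](app, *args)

    class FakeApp:
        """Stand-in for a real GUI driver; it just prints the actions."""
        def open(self, target):      print("open", target)
        def type(self, field, text): print("type", repr(text), "into", field)
        def click(self, button):     print("click", button)

    login_test = [
        ("open",  "login page"),
        ("type",  "username", "alex"),
        ("type",  "password", "secret"),
        ("click", "Log in"),
    ]

    run_test(FakeApp(), login_test)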

- Avoid complex logic in your tests (see the sketch after this list)

    1. keep tests linear: easy to understand, hard to make mistakes
    2. encapsulate necessary complexity
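
A small Python illustration of both points (the helper and the test are hypothetical): the test body reads top to bottom with no loops or branches, and the fiddly setup is hidden behind one well-named helper.

    def make_order(items):
        """Encapsulates the setup complexity so tests stay linear."""
        items = list(items)
        return {"items": items, "total": sum(price for _, price in items)}

    def test_order_total():
        order = make_order([("book", 10), ("pen", 2)])   # straight-line body
        assert order["total"] == 12

    test_order_total()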

- Don't build libraries simply to avoid repeating code; hodge-podge libraries make tests hard to understand and maintain; design your libraries, don't just whip them together on the fly

- Use standard scripting languages, avoid proprietary languages

- Have testers and programmers jointly charter automation projects

- Speed the development process instead of trying to save a few dollars on testing

- Test automation tool components:

    1. languages

        * tie other components together
        * describe tests
        * provide dedicated libraries

    2. interface drivers

        * interact with the product software under test

    3. test harnesses (a toy harness is sketched after this list)

        * collect and execute multiple tests
        * report test results

    4. remote agents

        * provide test related services on remote machines

    5. test parsers

        * interpret and execute tests expressed in convenient form

    6. test generators

        * create new tests based on models and algorithms
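
As a sketch of the test-harness component above (collect, execute, report), here is a toy Python version; all names are illustrative.

    def harness(tests):
        results = []
        for test in tests:
            try:
                test()
                results.append((test.__name__, "PASS"))
            except AssertionError as exc:
                results.append((test.__name__, f"FAIL: {exc}"))
        for name, outcome in results:       # report the results
            print(f"{name}: {outcome}")

    def test_addition():    assert 1 + 1 == 2
    def test_subtraction(): assert 2 - 1 == 0, "expected 1"  # deliberately failing

    harness([test_addition, test_subtraction])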

- Scripting languages for testing

    PERL
        1. well established
        2. vast libraries

    TCL

        1. well established
        2. compact
        3. popular with embedded systems

    PYTHON

        1. concise support for object oriented programming
        2. integrates well with java

    RUBY - best

        1. everything's an object
        2. principle of least surprise

    VB & VBSCRIPT

        1. popular
        2. integrate well with Microsoft technologies
        3. not open source

    TCL, PYTHON and RUBY have interactive command-line interpreters.

    Another up-and-coming scripting language is GROOVY. Like Jython, it runs inside the JVM.

    PERL has the most libraries of any of the languages listed here. For many people, this is a good reason to use PERL.

    The lack of an interactive command-line interpreter for PERL is, to me, its greatest weakness.

    The lack of native support for general templating is PYTHON's greatest weakness.

- Understand the motivations of the test tool vendors

    1. want you to think they are experts in testing: guess what? they are not
    2. encourage belief in tool magic
    3. achieve lock-in when your test library is in their language
    4. focus on improvements that people will see during the eval period: enhancements for power users go on the back burner

- Test automation architectural patterns (one of them is sketched after the list)
       
        1. scripting frameworks
        2. data driven scripts
        3. screen based tables
        4. action keywords
        5. test first programming
        6. api tests
        7. thin gui
        8. consult an oracle
        9. automated monkey
        10. assertions and diagnostics
        11. quick and dirty
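
One way to read "consult an oracle": check the code under test against a trusted reference implementation. A toy Python sketch (the names are mine; Python's built-in sorted() plays the oracle for a hypothetical home-grown sort):

    import random

    def check_against_oracle(candidate, oracle, trials=1000):
        rng = random.Random(0)                 # seeded, so failures reproduce
        for _ in range(trials):
            data = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
            assert candidate(list(data)) == oracle(list(data)), data

    my_sort = lambda xs: sorted(xs)            # stand-in for the code under test
    check_against_oracle(my_sort, sorted)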

- Test automation common problems

        1. spare time test automation
        2. lack of clear goals
        3. lack of experience
        4. testing the wrong stuff
        5. high turnover
        6. reaction to desperation
        7. reluctance to think about testing
        8. technology focus
        9. working in isolation

- Rules of software development

        1. define requirements
        2. manage source code, test data, tools
        3. design before coding
        4. review test automation code
        5. test the code
        6. track automation bugs
        7. document for other users
        8. establish milestones and deliverables

- SEVEN STEPS TO TEST AUTOMATION SUCCESS

        1. follow the rules of software development
        2. improve the testing process
        3. define requirements
        4. prove the concept
        5. champion product testability
        6. design for sustainability
        7. plan for deployment

- 2 focuses for test automation

        EFFICIENCY

            - reduce testing costs
            - reduce time spent in testing phase
            - improve test coverage
            - make testers look good
            - reduce impact on the bottom line

        SERVICE

            - tighten build cycles
            - enable refactoring and other risky practices
            - prevent de-stabilization
            - make developers look good
            - increase management confidence in the product

        Automation projects with a service focus are more successful.


- User interface change: your GUI test automation will likely break. What can you do? (A sketch of option 2 follows this list.)

        1. prevent developers from changing them
        2. design your automation so it is adaptable
        3. test via non-user interfaces
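
For option 2, the "window map" idea from the abstraction list earlier is one way to make automation adaptable: keep every locator in one table, so a UI change is a one-line fix instead of a hunt through all the tests. A sketch in Python against Selenium (the locator values are invented):

    LOGIN_PAGE = {
        "username": ("id", "user-field"),              # invented locators
        "password": ("id", "pass-field"),
        "submit":   ("css selector", "button.login"),
    }

    def element(driver, name):
        by, value = LOGIN_PAGE[name]
        return driver.find_element(by, value)  # driver: a Selenium WebDriver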

- Quality attributes for test automation

        1. maintainability
        2. reviewability
        3. repeatability
        4. integrity
        5. reliability
        6. reusability
        7. independence
        8. performance
        9. simplicity

- Top 10 reasons for automating tests

    1. manual testing sucks
    2. tool vendor said that capture replay works
    3. afterwards, we can fire all those pesky testers
    4. everybody else is doing it
    5. big bucks already spent on the test tool
    6. looks good on the resume
    7. keep the intern busy

- Gradual test automation: test automation benefits from a gradual approach; build some tests and see how they run before adding complexity

- Use a pilot project:

    1. validate your tools and approach
    2. demonstrate that your investment in automation is well spent
    3. quickly automate some real tests
    4. get a trial license for any test tools
    5. scale your automation project in steps

- Reasons for automating

    1. speed up testing
    2. allow more frequent testing
    3. reduce manual labor costs
    4. improve test coverage
    5. ensure consistency
    6. simplify testing
    7. just want testing to go away
    8. define the testing process
    9. make testing more interesting and challenging
    10. develop programming skills
    11. justify cost of the tools

    REASONABLE REASONS
    - high reuse
    - speed development
    - expand reach
    - smooth development

    UNREASONABLE REASONS
    - simplify testing
    - force organization
    - 100% automation
    - justify tool purchase

    MIXED BAG
    - regression testing
    - build skill and morale

- What are your success criteria?

        1. the automation runs
        2. the automation does real testing
        3. the automation finds defects
        4. the automation saves time

- ARE YOU READY TO AUTOMATE?

    1. is automation or testing a label for other problems?
    2. are testers trying to use automation to prove their prowess?
    3. can testability features be added to the product code?
    4. do testers and developers work cooperatively and with mutual respect?
    5. is automation developed on an iterative basis?
    6. have you defined the requirements and success criteria for automation?
    7. are you open to different concepts of what test automation can mean?
    8. is test automation led by someone with an understanding of both programming and testing?

- WHAT HAVE WE LEARNED ABOUT TEST AUTOMATION

    1. keep it simple
            - test automation tends to complicate things
            - the test suite itself will need to be tested
            - make sure the test suite meets the original goals

    2. build flexibly and incrementally
            - build and deliver test automation in stages
            - deliver a proof of concept early
            - deliver automation updates regularly
            - package the automation so that it can be installed easily
            - document all dependencies

    3. work with development
            - test automation is development; get help from your development experts (developers)
            - incorporate automation milestones into the development plan

    4. keep tests visible
            - visibility facilitates review
            - review encourages realism about test coverage
            - test suites require review and improvement
            - don't assume that old features are covered by existing tests
            - assess test suite weaknesses and product risks
            - are these tests still useful?

    5. use multiple approaches
            - it is better to use several approaches half-way than to use one perfectly
            - test expertise and manual testing are still required
            - manual testing is a sanity test for the automation
            - there will always be need for exploratory, non-repeated testing

    6. get an early start
            - an early start makes it more likely you can improve testability and get test APIs
            - your test strategy will include automation from the start

    7. commitment is essential
            - it is easy for test automation to be treated as a side project, where it won't get the resources it needs
            - commitment ensures that test automation gets the resources, cooperation and attention that it needs
                    * from management
                    * from development

Thoughts on testing - James Whittaker

              
- Take your testing up a level from test cases to techniques

Test cases are a common unit of measurement in the testing world. We count the number of them that we run, and when we find a bug, we proudly present the test case that we used to find it.

I hate it when we do that.

I think test cases - and most discussions about them - are generally meaningless.
I propose we talk in more high-level terms about test techniques instead. For example, when one of my testers finds a bug, they often come to my office to demo it to me (I am a well-known connoisseur of bugs, and these demos are often reused in lectures I give around campus).

Most of them have learned not to just repro the bug by re-running the test case. They know I am not interested in the bug per se but in the context of its exposure. These are the types of questions I like to ask during bug demos:

1. What made you think of this particular test case?
2. What was your goal when you ran that test case?
3. Did you notice something interesting while running the test case that changed what you were doing?
4. At what point did you know you had found a bug?

At Microsoft, we have begun using the concept of test tours as a way to categorize test techniques.

- It is our task to use testing as an instrument of improvement

- This is the true job of a tester: to make developers and development better; we don't ensure better software - we enable developers to build better software; it isn't about finding bugs, because the improvement that causes is temporary

The true measure of a great tester is that they find bugs, analyze them thoroughly, report them skillfully and end up creating a development team that understands the gaps in its own skill and knowledge.

Making developers better by helping them understand failures and the factors that cause them will mean fewer bugs to find in the future.

The real value of tests is not that they detect bugs in the code, but that they detect inadequacies in the methods, concentration and skill of those who design and produce the code. (Tony Hoare)

- Testing without innovation is a great way to lose talent

Testing sucks. Now let me qualify that statement: running test cases over and over - in the hope that bugs will manifest - sucks. It's boring, uncreative work.

What is interesting about testing is strategy: deciding what to test and how to combine multiple features and environmental considerations in a single test. The tactical part of testing, actually running test cases and logging bugs, is the least interesting part.

High Stakes, No Prisoners - Charles Ferguson

I just finished this great book about the first 5 years of the Internet and the many changes it brought to high-technology companies and to everybody's life in general.

Written by one of the founders of Vermeer Technologies (creator of FrontPage, later sold to Microsoft), the book is an excellent account of the rise and fall of Netscape, of the Internet revolution and, of course, of Microsoft - both the company's smartness and its predatory behaviour.

There are so many interesting things in the book, on so many topics:

- everything about a high tech start up
- why Microsoft is so successful at destroying other companies
- venture capitalist practices
- how to create a software product architecture
- how to hire, and how not to hire, a CEO
- and many others

Some of my favourite paragraphs are below:


- Whether through haste, overconfidence or ignorance, in 1994 and 1995 Netscape made a series of catastrophic technical and strategic errors that eventually proved their undoing. These included sloppy, indeed almost non-existent, technical architecture; foolish, immature hype that awakened Microsoft; failure to create proprietary advantage; failure to generate third-party support and lock-in; poor testing and quality control; excessive attention to minor markets to the neglect of Windows.

- The next mistake was the hiring of a non-technical CEO.

- The CEO of a serious high-technology company must have a serious technical background, or at least the ability to understand technology and a deep appreciation of its importance in strategic and organizational decisions.

- I have a generally low opinion of professional CEOs, although some are superb. They frequently have poor technical backgrounds, no concern for anything other than their bank accounts, and skills that are more political than substantive. And their track records are not great. Of the industry's most successful companies, there's only one - Cisco - that has been run for a long time by a conventional professional CEO. Intel, Microsoft, Oracle, AOL, Compaq, Dell and Gateway are all run by founders, early employees and/or former academics who somehow do okay despite not having Harvard MBAs and perfect resumes. When you stack them up against the people responsible for the last 20 years' performance of Apple, IBM, Netscape, Lotus and many others, you come away thinking that unqualified, socially dysfunctional, impatient, ruthless, egomaniacal founders aren't so bad after all.

- Writing a clever piece of code that works is one thing; designing something that can support a long-lasting business is quite another. Commercial software design and production is, or should be, a rigorous, capital-intensive activity. Software products should be based on a broad, deep structure that can support much more than whatever the product contains at any given time. In addition to code that works, you need documentation, help functions, error handling, multi-platform support, and multiple languages. You also need an underlying architecture that allows you to add and change features, purchase and integrate external software components, and allow other software vendors to make their products talk to yours, add customized widgets to it, or embed your product into something larger of their own. A good architecture, one that will carry you through a decade's worth of unpredictable technology and market changes, takes months to develop. But if you skip this step, as Netscape did, you have made a truly Faustian bargain.

- One goal of good architecture is to enable systems to be partitioned so that people can work effectively in parallel, their progress can be measured periodically and everyone can finish at the same time.

- There were lots of kids and Unix people at Netscape, but few seasoned technologists with PC software experience. Andreessen's job as chief technology officer was supposedly deciding what Netscape's future products should be. But he had no experience, and there was no chief architect; so for architectural change, who was minding the store? Moreover, of the entire top management team, only one person had any experience with a successful PC software company.

- The mangling of large companies by polished, politically astute, non-technical CEOs, which is a major part of what happened at Netscape, is a movie that Silicon Valley has seen many times before. ... Firing highly visible CEOs is all too rare, no matter how bad they are. Ben Rosen has done it twice, when he replaced the CEO of Compaq in 1991 and again in 1999. More often, however, everyone sits around, afraid to be the first to blow the whistle and provoke a fight. But by the time an obvious crisis forces action, it's often too late.

- Netscape's story also provides a remarkable case study in how to run a high-tech company, and how not to, at both strategic and operational levels.

- Perhaps never in American history has a new company faced larger opportunities and such profound choices as Netscape did.

- Netscape's browser was a mess from the very beginning. Their first browser was relatively simple, so choosing hacker style and spaghetti code to reach the market fast was tempting. But there is a general rule in software engineering that the cost of fixing an error doubles with every step down the engineering sequence, from initial conception to architecture to commercial release, and these costs continue to double with subsequent releases.

- Navigator 1.0 had almost no architecture. In Netscape's rush to market, it was conceivably justifiable to do this. Even if so, the failure to fix it immediately by developing a real architecture for the next product was utterly, totally fatal. Navigator 1.0 was throwaway code that they didn't throw away.

- Underlying each of the next browsers (Navigator 2.0, 3.0, 4.0) was the non-architected Faustian bargain of Navigator 1.0. With each release, Netscape's cost structure, technical limitations and development times worsened relative to Microsoft's, as did its ability to support new features and interfaces, with the result that Netscape's lead, which was at least a year, was eliminated within two years.

- Netscape's failure to architect its browser was just one of many engineering errors. Another, also serious, concerned testing. Software quality assurance is a complex activity, with a range of best practices and capital-intensive software tools for automating testing, bug tracking and error correction. For complex products, you even design the testing systems into the original architecture of the product. That's not the way Netscape did it, to put it mildly. According to some reports, Navigator 1.0 testers were hired by placing notes on bulletin boards offering to hire random college students at ten dollars per hour. A dangerously casual attitude toward QA persisted even well after Netscape had focused on mission-critical application and server products for conservative, quality-conscious Fortune 500 clients.

- During the development of Communicator 4.0, Netscape allocated about half of its browser development team to an effort to redesign the browser from scratch, using the Java language. It would be difficult to exaggerate the idiocy of this choice. At this writing, nobody yet uses Java to write large Windows applications, because its performance is still mediocre and its tools, environment and user interfaces are insufficiently mature. After a few months, Netscape canceled the effort.

- Netscape went to great lengths to make its own life more difficult and succeeded.

- Netscape decided to develop its early browsers so that they could be developed and released for many operating systems simultaneously: not just the multiple versions of Windows but also the Macintosh, OS/2 and a half dozen varieties of Unix. These operating systems and their user interfaces are fundamentally different from one another. Consequently, Netscape developed a major piece of software, the Netscape portable runtime layer, that covered up the specific features of the various operating systems. Netscape developers then wrote code targeted to this generic intermediate layer rather than to specific operating systems. This decision was an enormous strategic error. ... It was a major technological and engineering disaster. It meant that Netscape deprived itself of the best available tools for developing software, particularly for Windows.


Generate alternatives in testing

Related article:
Login Page testing mind map


The following images are taken from Edward De Bono's great book on lateral thinking.

They are an invitation to read the book and learn how to think differently when testing.

So, the problem offered is the following: given a square, find as many ways as possible to divide it into 4 equal parts.

This is the square:

[image: the square]

Before looking at the answers, try to come up with as many solutions as possible for dividing the square into 4 equal parts.

The solutions below are most interesting after you have tried the exercise yourself.

Good luck!!!

Scroll down in the page when done :)




The first easy way of dividing the square is this:

[image: first solution]

Another easy way follows:

[image: second solution]

Two less obvious solutions (black and blue) are:

[image: two more solutions]

Four even less obvious solutions (black, red, green, blue) are:

[image: four more solutions]

You think this is it, right?

More solutions are displayed below.

Keep scrolling...



[images: more solutions]

And yes, there is at least one more solution, using circle arcs, which I could not draw for lack of graphical skills. You will have to check the book for it, but it is similar to the last solution.

Now, why is this important to testing?

It shows that no matter how you, as a tester, test a given piece of application functionality, there are many other ways in which the same functionality can be tested.

The book referred to above teaches a different and creative way of thinking: lateral thinking.

In my opinion, any tester interested in improving their skills should read it.

Alex

PS: Sorry if the square actually looks like a rectangle; I noticed this mistake too late.

80-20 rule applied to testing

Everybody has heard about the 80-20 rule, which says that 80% of the results come from 20% of the causes.

This can be applied to any field as follows:

- 80% of the revenue of a company comes from 20% of the clients

- 80% of the donations to a charity come from 20% of the people

- 80% of the books in a bookstore are purchased by 20% of the clients

For software, this could mean that:

- 80% of the clients are using 20% of the functionality

- 80% of the bugs are caused by 20% of the functionality

I used to think that this is how things are, as the rule is so attractive in its common sense and simplicity.

The problem is that when you investigate it a little bit, things become a little more complicated.

Joel Spolsky has the following opinion on the topic in this article:

A lot of software developers are seduced by the old "80/20" rule. It seems to make a lot of sense: 80% of the people use 20% of the features. So you convince yourself that you only need to implement 20% of the features, and you can still sell 80% as many copies.
Unfortunately, it's never the same 20%. Everybody uses a different set of features. In the last 10 years I have probably heard of dozens of companies who, determined not to learn from each other, tried to release "lite" word processors that only implement 20% of the features.
This story is as old as the PC. Most of the time, what happens is that they give their program to a journalist to review, and the journalist reviews it by writing their review using the new word processor, and then the journalist tries to find the "word count" feature which they need because most journalists have precise word count requirements, and it's not there, because it's in the "80% that nobody uses," and the journalist ends up writing a story that attempts to claim simultaneously that lite programs are good, bloat is bad, and I can't use this damn thing 'cause it won't count my words. If I had a dollar for every time this has happened I would be very happy.
When you start marketing your "lite" product, and you tell people, "hey, it's lite, only 1MB," they tend to be very happy, then they ask you if it has their crucial feature, and it doesn't, so they don't buy your product.
Bottom line: if your strategy is "80/20", you're going to have trouble selling software. That's just reality. This strategy is as old as the software industry itself and it just doesn't pay; what's surprising is how many executives at fast companies think that it's going to work.
How does this apply to testing?

Well, the project release date is fixed so you cannot test everything well.

So, test only 20% of the application, as this is what the majority of the users will use.

Select the 20% of the application's functionalities that have the highest risk and test them well.

Test the remaining 80% of the functionality by just covering the happy paths.

You think you did a good job, and the project manager is happy with the results.

And after the release, the support team receives lots of issues from clients about the 80% of the application that was not tested well.

What's more, the senior management of the company starts noticing problems all over the application too.

The solution is, of course, applying an endless stream of patches with bug fixes for the issues discovered by the customers, frustrating the customers as much as possible and wasting as much time as possible for both the development and the testing teams.

How familiar is this scenario?

Classic Testing Mistakes

This is too good not to be published here for easy future reference: Classic testing mistakes re-visited.

For anyone who is interested in reading the original article (by Brian Marick), this is the link.

This is the original list of classic testing mistakes:

The role of testing

 ·  Thinking the testing team is responsible for assuring quality.
 ·  Thinking that the purpose of testing is to find bugs.
 ·  Not finding the important bugs.
 ·  Not reporting usability problems.
 ·  No focus on an estimate of quality (and on the quality of that estimate).
 ·  Reporting bug data without putting it into context.
 ·  Starting testing too late (bug detection, not bug reduction).


Planning the complete testing effort

 ·  A testing effort biased toward functional testing.
 ·  Underemphasizing configuration testing.
 ·  Putting stress and load testing off to the last minute.
 ·  Not testing the documentation.
 ·  Not testing installation procedures.
 ·  An overreliance on beta testing.
 ·  Finishing one testing task before moving on to the next.
 ·  Failing to correctly identify risky areas.
 ·  Sticking stubbornly to the test plan.


Personnel issues

 ·  Using testing as a transitional job for new programmers.
 ·  Recruiting testers from the ranks of failed programmers.
 ·  Testers are not domain experts.
 ·  Not seeking candidates from the customer service staff or technical writing staff.
 ·  Insisting that testers be able to program.
 ·  A testing team that lacks diversity.
 ·  A physical separation between developers and testers.
 ·  Believing that programmers can’t test their own code.
 ·  Programmers are neither trained nor motivated to test.


The tester at work

 ·  Paying more attention to running tests than to designing them.
 ·  Unreviewed test designs.
 ·  Being too specific about test inputs and procedures.
 ·  Not noticing and exploring “irrelevant” oddities.
 ·  Checking that the product does what it’s supposed to do, but not that it doesn’t do what it isn’t supposed to do.
 ·  Test suites that are understandable only by their owners.
 ·  Testing only through the user-visible interface.
 ·  Poor bug reporting.
 ·  Adding only regression tests when bugs are found.
 ·  Failing to take notes for the next testing effort.


Test automation

 ·  Attempting to automate all tests.
 ·  Expecting to rerun manual tests.
 ·  Using GUI capture/replay tools to reduce test creation cost.
 ·  Expecting regression tests to find a high proportion of new bugs.


Code coverage

 ·  Embracing code coverage with the devotion that only simple numbers can inspire.
 ·  Removing tests from a regression test suite just because they don’t add coverage.
 ·  Using coverage as a performance goal for testers.
 ·  Abandoning coverage entirely.
 

Good books for new testers

I recently found 3 good books that new testers can use to improve their testing knowledge.

All of them are written by the author of the http://enjoytesting.blogspot.in blog, Ajay Balamurugadas.

He trained with James Bach and is an adherent of the context-driven school of testing.

These are the links to the books.

They are not free but very cheap ($8 for all 3):

What If

What If - 50 tips for winning testing contests

What If - 50 tips to boost your productivity

Thanks.

Alex

James Whittaker - About Google Search


The problem with Internet search is that being stupid about it is profitable. The more ugly blue links you serve up, the more time users have to click on ads. Serve up bad results and the user must search again, and this doubles the number of sponsored links you get paid for. Why be part of the solution when being part of the problem pays so damn well? It's 2012 and we are still typing search queries into a text box. Now you know why: a 'find engine' swims in the shallow end of the profit pool. Is it any surprise that technology such as Siri came from a company that doesn't specialize in search? (Where do you place an ad in a Siri use case?) There's no more reason to expect search breakthroughs from Google than there is to expect electric car batteries to be made by Exxon.

Your success at search depends on how good you are at it and how much time you devote to it. Users have noticed and abandoned search in droves, particularly mobile search. They voted with their fingers and installed apps to do the search work for them. Apps are capable of sorting through just the portions of the web you might be interested in. Don't search the web for soccer scores, use an app. Don't search the web for hotel deals, use an app. Apps are better because they cut search out of the equation. Apps succeed in large part because search is so broken.

You want a prediction of the future? The trend of disappearing search will continue. The web will melt into the background and humans will progressively be removed from their labor-intensive and frustrating present by automation. In five years the web is likely to be completely invisible. You will simply express your intent and the knowledge you seek will be yours. Users will be seamlessly routed to apps capable of fulfilling their intent. Apps won't need to be installed by a user; they will be able to find opportunities to be useful all by themselves, matching their capabilities with a user's intent. You need driving directions? Travel reservations? Takeout? Tickets to a show? Groceries? Tell your phone; it will spare you the ugly links. It will spare you the landing page. It will spare you the ads. It will simply give you what you asked for. This is already happening today; expect it to accelerate.