Wilco van Esch

    How much testing is enough in software testing?

    Answers to this question typically fall into two categories: narrative and numerical.

    Narrative

    - Testing is never done, but you should stop when all relevant risks have been covered to the point you can afford (budget) and accept (agreed with stakeholders)
    - Testing is done when you've comprehensively explored all high and medium priority functionality
    - Testing is done when a skilled test specialist judges they have gathered enough information about the product for informed decisions to be made about its delivery
    - Testing is done when you've exhausted all the test ideas you can come up with

    Numerical

    - Testing is done when 100% of planned test cases have been executed
    - Testing is done when at least 80% of the test scenarios are passing
    - Testing is done when 0 issues are reported from automated or manual test suites
    - Testing is done when test coverage is above an agreed percentage, e.g. 95%, and all those tests have been executed with 0 fails
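
    In practice, criteria like these are usually enforced as a gate in the build pipeline. A minimal sketch of such a gate, assuming a Jest and TypeScript setup with 95% chosen purely as an example threshold, would fail the build whenever coverage drops below the configured numbers:

        // jest.config.ts - a sketch of a numerical exit criterion enforced by tooling.
        // The 95% figures are example thresholds only, not a recommendation.
        import type { Config } from 'jest';

        const config: Config = {
          collectCoverage: true,
          coverageThreshold: {
            global: {
              statements: 95,
              branches: 95,
              functions: 95,
              lines: 95,
            },
          },
        };

        export default config;

    A gate like this only answers "are the configured numbers met for the tests we already wrote?", which is exactly the kind of signal the numerical answers rely on.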

    My perspective

    The exit criteria in the numerical category upset me and I will explain, with a number, why that is. The chance that your documented tests accurately represent all the tests that matter is precisely 0 (zero). There simply are too many ways in which software can disappoint.

    That being said, feeling very strongly about an opinion can be a sign of missing important nuance. I will seek that nuance. If you're working in a highly regulated environment, I can imagine you have to demonstrate your formal exit criteria and how you measure them in concrete KPIs such as executed test cases and passed test cases. I would still hope your testing wouldn't stop at executing already documented tests, but would also prescribe a certain coverage of session-based exploratory testing and include a formal sign-off – if you really do work in that type of environment – based on the informed advice of the test specialist(s).

    Another nuance I can think of is marketing agencies producing high-turnover, near-identical email, banner, or microsite campaigns. You could choose to have standardised test suites of which you can say that if 100% pass, it's automatically a go for that campaign. Those test suites could also include less prescriptive, high-level test scenarios for exploring functional and non-functional requirements, thus covering off more risks whilst still feeding into a single number.
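
    For illustration, such a standardised suite might look like the sketch below, assuming Playwright and TypeScript; the CAMPAIGN_URL variable and the individual checks are placeholders rather than a prescribed set:

        // standard-campaign-checks.spec.ts - a sketch of a reusable suite that every
        // campaign microsite must pass before it gets an automatic go.
        import { test, expect } from '@playwright/test';

        // Hypothetical: the campaign under test is supplied by the CI pipeline.
        const campaignUrl = process.env.CAMPAIGN_URL ?? 'https://example.com/campaign';

        test.describe('standard campaign checks', () => {
          test('page loads and has a title', async ({ page }) => {
            const response = await page.goto(campaignUrl);
            expect(response?.ok()).toBe(true);
            await expect(page).toHaveTitle(/.+/);
          });

          test('external links resolve', async ({ page, request }) => {
            await page.goto(campaignUrl);
            const hrefs = await page
              .locator('a[href^="http"]')
              .evaluateAll((anchors) => anchors.map((a) => (a as HTMLAnchorElement).href));
            for (const href of hrefs) {
              const res = await request.get(href);
              expect(res.status(), `${href} should not be broken`).toBeLessThan(400);
            }
          });
        });

    Because the campaigns are near-identical, a single suite like this can plausibly encode most of the risk picture for the template, which is what makes "100% pass means go" defensible in that context.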

    I lean more towards the narrative category. Testing is done when a skilled test specialist judges that, within a particular context of risk tolerance and regulatory requirements, additional time spent on testing is not worth the additional information it would reveal. Narratives, however, are not the most practical thing. If I had to give practical exit criteria for testing in the environment of Agile web development (in which I have done most of my work), I would say:

    1. Requirements have been interrogated and testing needs anticipated (both can be done in 3 Amigos and Refinement sessions)
    2. Local testing and meaningful code reviews have been completed
    3. Static, unit, integration, and UI-level test scenarios are passing
    4. Acceptance criteria in the User Story have been verified
    5. Other relevant functional and non-functional requirements have been verified (through exploratory testing as well as the standard checks from your test strategy)
    6. The impact of code changes (performance, live issues, etc.) has been monitored after deployment to production
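
    To make the last criterion slightly more concrete, post-deployment monitoring can start as small as a scheduled probe like the sketch below, assuming Node 18+ for the global fetch; the health endpoint and latency budget are illustrative, and in practice this signal would more likely come from your existing monitoring or APM tooling:

        // post-deploy-check.ts - a sketch of criterion 6: a probe run on a schedule
        // after release to watch a deployment's health and response time.
        // HEALTH_URL and the 500 ms budget are illustrative placeholders.
        const healthUrl = process.env.HEALTH_URL ?? 'https://example.com/health';
        const latencyBudgetMs = 500;

        async function checkOnce(): Promise<void> {
          const started = Date.now();
          const response = await fetch(healthUrl);
          const elapsedMs = Date.now() - started;

          if (!response.ok) {
            throw new Error(`Health check failed with HTTP ${response.status}`);
          }
          if (elapsedMs > latencyBudgetMs) {
            throw new Error(`Health check took ${elapsedMs} ms (budget ${latencyBudgetMs} ms)`);
          }
          console.log(`Health check OK in ${elapsedMs} ms`);
        }

        checkOnce().catch((error) => {
          console.error(error);
          process.exitCode = 1;
        });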

    When all these have been done, you can stop testing that piece of work. Or can you? You could argue that once a code change is in production, it's in your care forever, but anything resulting from that care would be a new piece of work.