Wilco van Esch

    The over- and underestimation of test automation

    Let's be clever about test automation in web development at scale. If you think manual testing no longer has a place, this post is for you. Conversely, if you think programmed checks have low value, this post is also for you.

    Underestimation

    "Test automation is just a series of dumb checks telling us whether what was true is still true"

    Whilst test automation is not limited to automated regression checks, those are already hugely valuable. Imagine putting changes into production every day and having to wait for a suite of manual regression checks to be completed each time to learn whether the changes break anything.
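
    As a rough sketch of what such a check can look like (assuming a Playwright setup; the URL, labels, and login flow are hypothetical examples, not anything from this post):

        // regression.spec.ts - a minimal browser-driven regression check
        import { test, expect } from '@playwright/test';

        test('existing login flow still works', async ({ page }) => {
          // Hypothetical page and selectors, for illustration only
          await page.goto('https://example.com/login');
          await page.getByLabel('Email').fill('user@example.com');
          await page.getByLabel('Password').fill('correct-horse-battery');
          await page.getByRole('button', { name: 'Log in' }).click();
          // The check encodes a known truth: a valid login lands on the dashboard
          await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
        });

    Run in CI on every change, a suite of such checks answers "is what was true still true?" in minutes rather than days.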

    There's an understandable predicament for testers who have no interest in an SDET (Software Development Engineer in Test) role and find the job market increasingly demands a tester to be just that. However, the response should not be to deny the value of test automation. The response should be to celebrate that test automation (whether done by you or by developers) gives a test specialist more time for the interesting aspects of testing, by outsourcing the repetitive yet necessary regression checks to one's friendly neighbourhood robot.

    Overestimation

    "When can we expect to reach 0% manual testing?"

    Looking on the bright side, it's encouraging that test automation has become so valuable in web development that some people think it can deliver us from all our woes. Less encouraging is that, after at least two decades of refining exploratory test techniques and teaching that quality is more than verifying that a few known functional claims are true, the global test community has failed to reach outside its own silo and influence a new generation of software engineers and engineering managers.

    Fewer and fewer view testing as more than mere verification of acceptance criteria, scripted regression checks, or arbitrary bug hunting. By those standards it's true: we could handle testing through test automation and, for the rudimentary bug hunting aspect, through developers clicking around their own work and reviewing screenshots from cross-browser previewing tools. Yet this leaves a number of risks unexamined and a number of other risks inadequately covered.

    Where we are now

    "For testing we always add 1 story point."

    Testing has evolved. It has become increasingly fit to handle the typical modern-day web development project, with its continuous delivery and swiftly changing requirements, thanks in large part to the increased emphasis on test automation at all levels (unit, integration, end-to-end) and all stages (TDD/BDD early on, regression tests in CI, logging and monitoring in production), and to the Agile (or agile) and DevOps practices of shifting left and taking a whole-team approach to testing.
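
    To make the "levels" concrete, here is a minimal unit-level sketch using Node's built-in test runner (formatPrice is a hypothetical helper, invented for illustration):

        // formatPrice.test.ts - a unit-level check: fast, cheap, runs on every save
        import { test } from 'node:test';
        import assert from 'node:assert/strict';
        import { formatPrice } from './formatPrice'; // hypothetical helper

        test('formats minor units into a display price', () => {
          assert.equal(formatPrice(1999, 'GBP'), '£19.99');
        });

        test('rejects a negative amount', () => {
          assert.throws(() => formatPrice(-1, 'GBP'), RangeError);
        });

    The same claim could be checked at the end-to-end level, but far more slowly and expensively, which is why the level at which a check runs is itself a cost/benefit decision.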

    However, gaps have opened up and are widening. Developers are taking a more active and integrated part in testing, which is very welcome, but testers have shifted their focus from exploratory test techniques, custom test tooling, and non-functional testing to developing browser-driven end-to-end regression suites, and no test manager or test coach is taking responsibility for tirelessly promoting quality and appropriate ways of achieving it.

    There are now development teams that haven't worked with professional testers (aside from SDETs) or test managers and think there are three types of testing: 1. unit tests written by a developer, 2. UI tests written by an SDET, and 3. verification where a developer or tester manually checks whether a user story's acceptance criteria are met. Some teams are also aware of integration testing.

    The gaps are filled only when someone on the team happens to have the relevant capability, an interest in that particular quality aspect, and the time to look into it. Validation is done to some extent by product owners, with a business view rather than an integrated user/business/technical view, and usually not until the acceptance stage rather than when requirements are worked out. Negative testing is done to some extent through ad hoc checks on one's own work and dogfooding, and canary releases are sometimes used to compensate for the lack of depth. Performance, security, accessibility, l10n/i18n, and other types of cross-functional testing are done by individual developers (depending on familiarity, motivation, and personality) or are outsourced, in both cases without anyone being able to explain why the chosen test approach is adequate to the needs of the project. Exploratory testing is not done at all. Test tools, harnesses, and frameworks are chosen and built, or not built, without considering cost/benefit, applicability, or limitations.
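
    Negative testing in particular lends itself to automation once someone takes responsibility for it. A hedged sketch (using Playwright's request fixture; the endpoint and payload are hypothetical):

        // negative.spec.ts - a negative test: invalid input must fail cleanly
        import { test, expect } from '@playwright/test';

        test('signup rejects malformed input without a server error', async ({ request }) => {
          const response = await request.post('https://example.com/api/signup', {
            data: { email: 'not-an-email', password: 'x'.repeat(10000) },
          });
          // A 4xx rejection is expected; a 5xx would mean unvalidated input
          // reached the backend
          expect(response.status()).toBeGreaterThanOrEqual(400);
          expect(response.status()).toBeLessThan(500);
        });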

    The theme in all this is that we're so preoccupied with delivering a feature and making sure it works for a minimum set of use cases that important risks go unaddressed, waved away with the idea that addressing risks doesn't provide value to customers (seeing as it doesn't provide a new feature to the user) or that these risks don't matter because you can respond quickly when things fail in production. This is absurd when you could mitigate those risks early on at a much lower cost, preventing or identifying issues that are much harder to resolve at later stages, and when responding quickly in production presupposes monitoring that is more sophisticated and broad than the quality controls you'd have applied before release.

    Where we can go next

    "So then we just need to have developers think about quality more, right?"

    You do, if you leave out "just". Developers should consider risks, quality, and testing, and testers should promote that, assist with it, and train developers in it. But don't expect every skilled developer to also become so skilled in testing as to equal the years of deliberate practice and experience of a dedicated test specialist. What's more, it would require effectively switching mindsets from "how do I make this work?" to "what are all the circumstances under which this wouldn't work, or wouldn't meet the needs of our target audiences or the business?" throughout development, and in my experience that is an unrealistic imposition on developers in terms of human cognition, available time, and personal interest.

    So you can have a team where everyone is a software engineer (aside from the PO/BA, UX, and the Scrum Master), but you may still need at least one person among them who is a highly skilled test specialist. You also need at least one person, ideally the same person(s), with both the will and the mandate to stand up for quality. You can pontificate about quality being everyone's job, and it certainly is, but in practice it still takes a torchbearer to light the way.

    You could take it a step further and have one person on the team focused on test automation, test tooling, test environments, and supporting developers in their testing, and another focused on domain knowledge, big-picture thinking, exploratory testing, cross-functional testing, and still a degree of technical testing. I've heard it said that software engineers on a Scrum team should be interchangeable. I think that's a harmful idea. Great teams are not homogeneous; they are a group of specialists working together. One of those specialists should know all about the return on investment of every level, phase, and type of testing.

    Conclusion

    We should exhaust the possibilities of test automation. We should apply it wherever it can save us time. We should apply it wherever it can provide swift, reliable, and meaningful feedback. We should apply it to functional testing as well as cross-functional testing. We should explore new avenues like 'AI-driven' test generation. We should also use exploratory testing wherever it can save us time. We should apply exploratory testing wherever it can provide swift, reliable, and meaningful feedback. We should use exploratory testing to uncover scenarios no-one had thought of. We should learn and ask questions and challenge assumptions at all times.
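
    As one concrete example of automating a cross-functional concern, an accessibility scan can run alongside the functional checks (a sketch assuming the @axe-core/playwright package; the URL is a hypothetical example):

        // a11y.spec.ts - an automated cross-functional (accessibility) check
        import { test, expect } from '@playwright/test';
        import AxeBuilder from '@axe-core/playwright';

        test('home page has no detectable WCAG A/AA violations', async ({ page }) => {
          await page.goto('https://example.com/');
          const results = await new AxeBuilder({ page })
            .withTags(['wcag2a', 'wcag2aa'])
            .analyze();
          expect(results.violations).toEqual([]);
        });

    A scan like this only catches machine-detectable issues, which is exactly why the exploratory testing argued for above still has a place next to it.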