
Wilco van Esch


    Getting time to add unit and integration tests retroactively

    In How to handle technical debt I explained how teams that have a handle on repaying technical debt actually make it work. Now I want to go into more detail specifically on getting the following done:

    • Move time-consuming or frustrating manual regression checks to unit and integration tests
    • Move time-consuming or unreliable end-to-end UI-level checks to unit and integration tests


    Picture a cross-functional product team that has been active for some time and spends a non-trivial amount of its development cycles running UI-level checks and/or performing manual regression checks. The team wants to be more efficient and replace these with earlier, faster feedback. However, they are not able to just go and do it. Some teams are, and don't need any of this help. Others do, such as teams that spend 100% of their capacity on feature work.


    Here's what you do:

    1. Identify only the worst check you want to move to a lower level of testing.
    2. Describe the technical task in your issue/project management system.
    3. Justify it by comparing the time saved to the time invested.
    4. Chase the task until it is planned into the team's development cycle.
    5. Rejoice, because now you do have the time to move tests to lower levels.


    Identify it

    Here's what a team might be tempted to do: document & evaluate all their current manual regression checks and/or end-to-end UI-level checks for how long they take to execute versus the value they bring, discuss for each check whether they can be adequately covered by unit and/or integration tests, create a plan that specifies when and how each test should be moved to what level, create a workgroup to start handling all that work, and plan regular status meetings. If you have all the time in the world and you enjoy bureaucracy, do this.

    Or, do the worst check first, enjoy a slight improvement to everyone's lives, then do the next worst check, repeat.

    You also don't have to fully analyse the test suites to find out what that worst check is. For manual regression checks, the person doing the checks will have a keen feeling for which one is the most time-consuming or the most frustrating to set up the test conditions for compared to how likely it is to lead to meaningful issues. For end-to-end UI-level checks, simply check the pipelines and look for the test scenario that takes the longest.
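    For the pipeline side of this, a quick script can do the looking for you. This is a rough sketch, assuming your CI publishes JUnit-style XML reports (most pipelines can be configured to do so); the report snippet and test names below are made up for illustration:

    ```python
    import xml.etree.ElementTree as ET

    def slowest_tests(junit_xml: str, top: int = 3):
        """Return (name, seconds) pairs for the longest-running test cases."""
        root = ET.fromstring(junit_xml)
        timings = [
            (case.get("name"), float(case.get("time", 0)))
            for case in root.iter("testcase")
        ]
        return sorted(timings, key=lambda t: t[1], reverse=True)[:top]

    # Hypothetical report snippet for illustration:
    report = """
    <testsuite>
      <testcase name="checkout_happy_path" time="312.4"/>
      <testcase name="login_flow" time="48.1"/>
      <testcase name="full_order_regression" time="781.9"/>
    </testsuite>
    """
    print(slowest_tests(report, top=2))
    ```

    That's the whole analysis: the scenario at the top of the list is your first candidate.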

    You do want to give a little thought to whether the tests you're moving to lower levels of testing are actually covering meaningful risks, whether there are more meaningful risks that are currently uncovered, and whether the tests you're moving are in fact already covered adequately at lower levels. That should inform which ones you actually move on to the next step for.

    Describe it

    Create a task. Name it something like "Create unit/integration tests for {functionality}".

    Justify it

    In the task you can add the justification. This will be useful when promoting it with the team, such as in a backlog grooming / refinement session. So what is the justification?

    The most persuasive argument is going to be time saving.

    Time saving is obviously attractive to a business. Time is money. It's also easy to wrap your head around, and a value that can be expressed in numbers. In your company you might even need to express it in numbers. That would mean estimating, over some period of time such as a year, the time you'd spend writing and maintaining the new unit and/or integration test(s), and subtracting it from the time you currently spend running and maintaining the checks that will be replaced. Don't spend a lot of time on this or you're just reducing the time saved. All you need is a rough estimate.
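    The arithmetic is simple enough to fit in a few lines. All the numbers here are invented purely to show the shape of the estimate:

    ```python
    # Back-of-the-envelope estimate (every number below is made up):
    # compare a year of running the manual check against writing and
    # maintaining the replacement tests.
    runs_per_year = 26                  # e.g. the check runs every two-week release
    minutes_per_manual_run = 45         # setting up conditions and executing the check
    cost_of_manual = runs_per_year * minutes_per_manual_run

    minutes_to_write_tests = 480        # one-off: roughly a day of work
    minutes_maintenance_per_year = 120  # occasional updates to the new tests
    cost_of_automated = minutes_to_write_tests + minutes_maintenance_per_year

    saved = cost_of_manual - cost_of_automated
    print(f"Estimated first-year saving: {saved} minutes ({saved / 60:.1f} hours)")
    ```

    Note the one-off writing cost only counts once, so every year after the first the saving grows.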

    Other major benefits of moving tests to lower levels of testing, such as earlier feedback and more reliable test results, are harder to estimate and less persuasive when you produce estimates for a single task. If anything, quantifying them will take away the persuasive force of your justification, so mention these benefits only by name, or leave them unsaid until you need them.

    Chase it

    Once the task has been agreed to be valuable enough, you just have to ensure that you or someone else on your team grabs it from the backlog, or formally plan it into a sprint if that's how you work.


    The team didn't have time! The team wanted the time! Now the team has the time!

    The buts

    But these new unit or integration tests don't test everything a tester picks up on.

    We're only taking explicitly defined test scenarios and automating those. They were never part of the other things a tester picks up on. Those more interesting risks are the ones the team can continue to focus their brainpower on.

    But the time saving can never weigh up against the value of feature work.

    That's true if the team works on validated customer requests and on fast market tests of new ideas. There are two ways out: 1) you could be wrong, and the task is more valuable because the rest of the work isn't as valuable as assumed; 2) promote the long term instead: skipping one of these tasks might not make a great difference, but never doing them will keep bottlenecks in place and cost the company real opportunities through a slower time to market. If the effort as a whole is valuable enough, we must do the parts.

    But I don't know what unit or integration test should be written instead.

    It's enough to know what test scenario is currently covered by a UI-level regression check and/or manual regression check that deserves replacing. Describe that scenario. Then the team can discuss how to adequately cover it at lower levels, or you get ahead of it by sitting with a developer and thinking it out together. You could even rubber duck it with an LLM.
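    As a hedged illustration of what "covering it at a lower level" can look like: suppose the UI check verified that a discount code reduces the basket total. The pricing rule itself (hypothetical here, as are the function and test names) can be pinned down by unit tests, leaving the browser out of the loop:

    ```python
    def apply_discount(total: float, code: str) -> float:
        """Hypothetical pricing rule: SAVE10 takes 10% off, anything else is ignored."""
        if code == "SAVE10":
            return round(total * 0.9, 2)
        return total

    def test_save10_reduces_total_by_ten_percent():
        assert apply_discount(100.0, "SAVE10") == 90.0

    def test_unknown_code_leaves_total_unchanged():
        assert apply_discount(100.0, "BOGUS") == 100.0

    # Run directly here; in a real project a runner such as pytest would collect these.
    test_save10_reduces_total_by_ten_percent()
    test_unknown_code_leaves_total_unchanged()
    ```

    The scenario description stays the same; only the level at which it's checked changes.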

    But I care more about a UI-level check that's unreliable than one that's slow.

    I'd hope you'd be in a position where immediate problems, like a flaky test that makes the pipeline fail when it shouldn't, can be tackled immediately by the team. If not, then sure, create a task for the check that would have the most return on investment from being moved to lower levels, rather than necessarily the slowest one.