Let's assume a stream-aligned development team that is able to deliver value independently in some form, such as feature updates in a section of the application. The team has strong expertise in back-end and front-end development, software architecture, product management, and UI design. Any change that passes the pipeline stages is automatically merged directly into the main trunk. Changes are hidden from end users through feature toggles.
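The trunk-based flow above relies on feature toggles to keep merged-but-unfinished work invisible to end users. As a minimal sketch, assuming a simple in-memory flag store (a real team would more likely use a toggle service or configuration system), with a hypothetical flag name:

```python
# Minimal feature-toggle sketch. The flag store and flag name are
# illustrative; real systems typically read flags from a config
# service so they can be flipped without a redeploy.
FLAGS = {
    "new_working_hours_ui": False,  # merged to trunk, hidden from users
}

def is_enabled(flag: str) -> bool:
    """Return whether a feature flag is on (defaults to off)."""
    return FLAGS.get(flag, False)

def render_settings_page() -> str:
    """Serve the new or the old experience depending on the toggle."""
    if is_enabled("new_working_hours_ui"):
        return "settings page with working-hours editor"
    return "settings page (current behaviour)"
```

Because the new code path is merged but dark, the team can keep integrating continuously while deciding separately when to expose the feature.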
How does a test specialist fit into this team?
Software quality in DevOps and CD
If the whole team is responsible for software quality, do you need a test specialist? Everyone should build the skill and claim the time to challenge explicit requirements, discover implicit requirements, cover non-functional requirements, write test automation at the right level, perform skilled exploratory testing, and improve testability and observability. If a team has no trouble doing all of this well, they don't need a test specialist. In practice, teams without test specialists fall short in all of the above. Teams with test specialists sometimes also fall short in many or all of the above; if you are that test specialist, you'll never lack for things to do and improve.
So what might a test specialist practically do, on an average day?
Pairing & ensembling
Feedback should come as close to when the code is being written as possible, to reduce delays and to reduce context switching. There are a couple of ways of going about this.
Ensemble programming (also known as mob programming) means completing the entire development cycle for a discrete piece of work as a team. Feedback can't get more immediate than this. Instead of thinking of a solution on your own and having it be examined much later, the whole team is there to discuss or whiteboard it together. Every line of code is written together, getting continuous collaborative code review. A test specialist should be there to challenge the solution, to generate test ideas, to give guidance on how to make the automated test scenarios meaningful.
When the team does not work with ensembling, a test specialist can give the same input on a smaller scale through pair testing. This can happen after a merge request has passed code review - to reduce the need for retesting - or as soon as a merge request is created - to keep feedback closer to when the code is written - or before any code has been written - to challenge or rubber-duck the solution before it is embarked upon. As with ensembling, pairing has the additional benefits of accelerating knowledge sharing and resolving questions without delay.
Shifting left & shifting right
There are many more opportunities for improving software quality than a "Ready for Testing" column would suggest. Imagine a team is discussing a feature which would allow working hours to be set. A developer asks what the granularity should be. A discussion follows, comparing the consequences of choosing 1 hour versus 30 minutes versus 15 minutes. An agreement is reached and the team moves on. A tester could have been there to say "hold on, do we need to account for holidays?" One simple question could prevent a great deal of headache later on.
It wouldn't have to be just one question, of course, nor would the questions have to be about completeness of requirements. It could also be asking why we think there will be return on investment from this feature. It could also be rewording the requirements to be more lucid, revealing new possible scenarios or edge cases as a side effect. It could also be asking how we're going to prove we've done a good job: do we need to put anything in place to test and monitor it? It could also be asking about non-functional risks, to help prevent performance issues, security vulnerabilities, and so on.
Test automation & testability
A test specialist could inform the writing of test automation and/or write it themselves. When a tester writes test automation, it's typically in the form of UI-level tests which exercise the entire stack, as lower-level tests live closer to the code and are more easily created by those who also live close to the code. The tester can implement and maintain these tests, optimise them to run as swiftly and reliably as possible, and ensure the information they provide is of value. Aside from UI-level checks, they could also write custom test tooling wherever it would provide more benefit than it would cost to create and maintain.
The tester could also inform the test automation by agreeing with the team which happy and unhappy scenarios make sense to cover at the unit versus integration versus UI level. If the test strategy is for UI-level tests to cover critical user journeys, then they could analyse those journeys. They could also help developers determine what a useful unit test is, what a useful integration test is, whether the tests test what they are trying to test, what could be added, what could be thrown away, and what could be more efficient.
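As an illustration of covering happy and unhappy scenarios at the unit level, here is a sketch built around a hypothetical working-hours validator (the function name and its rules are invented for this example, echoing the granularity discussion above):

```python
# Hypothetical unit under test: validating a working-hours slot at
# 15-minute granularity. Minutes are counted from midnight.
def validate_slot(start_minute: int, end_minute: int) -> bool:
    """A slot is valid if it is aligned to 15-minute boundaries,
    non-empty, and fits within a single day."""
    if start_minute % 15 or end_minute % 15:
        return False
    return 0 <= start_minute < end_minute <= 24 * 60

# Unit-level checks: fast, close to the code, one rule per scenario.
def test_happy_path():
    assert validate_slot(9 * 60, 17 * 60)          # 09:00-17:00

def test_unhappy_paths():
    assert not validate_slot(9 * 60 + 5, 17 * 60)  # off the 15-minute grid
    assert not validate_slot(17 * 60, 9 * 60)      # ends before it starts
    assert not validate_slot(0, 25 * 60)           # spills past midnight
```

Rules like these belong at the unit level; a UI-level test would then only need to cover the journey of a user saving their working hours, not every boundary condition.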
Observability & user advocacy
If you have a team with developers, a product owner, and a UX/UI designer, you have a technical perspective, a business perspective, and a user perspective. With a test specialist you have someone who combines all three in one person. You might say that everyone in a team should have all three perspectives, which I agree with, but in practice it's difficult - for example - to get out of the technical perspective when you're knee deep (or neurons deep) in code unless you're participating in a pairing or ensembling session. A tester should think beyond correctness, about what else will matter to users and weigh those things against the cost of developing and testing against them. A tester should also challenge assumptions about who the user is and what the priorities of the business are.
Then, when a code change is deployed, the team should not stop caring. A test specialist can help ensure that a change continues to be monitored - whether they ultimately are the one to do so or not - to see what the uptake is, to see if any unpredicted issues occur, to bring back customer feedback to the team, to set up dynamic threshold alerts to be automatically notified if key metrics become worrisome, to create an environment where the team gets the information it needs to make good follow-up decisions after a change.
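A dynamic threshold alert of the kind mentioned above can be as simple as comparing the latest sample of a key metric against its recent history. A minimal sketch, assuming the samples are available as a list of floats and using an illustrative three-sigma rule (real monitoring stacks provide far more robust anomaly detection):

```python
import statistics

def is_worrisome(history: list[float], latest: float, sigmas: float = 3.0) -> bool:
    """Flag the latest sample if it deviates from recent history by
    more than `sigmas` standard deviations. The 3-sigma default and
    the metric itself (e.g. errors per minute) are illustrative."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    return abs(latest - mean) > sigmas * stdev

# Recent error rates hover around 1.0 per minute.
recent = [0.9, 1.1, 1.0, 0.8, 1.2]
is_worrisome(recent, 1.1)  # within normal variation -> False
is_worrisome(recent, 5.0)  # sudden spike -> True
```

The point is not the statistics but the feedback loop: a threshold derived from the metric's own history adapts as usage grows, instead of a hard-coded limit that goes stale.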
When testing is imagined as verification of explicitly defined functional requirements, there is neither much joy nor much value in testing. Testing should go far beyond that, and exploratory testing is one way to do so. This doesn't change in a DevOps or CD environment. Just because it works best not to have a gated testing phase doesn't mean an end to exploratory testing. As soon as there's a branch for a new change, you could serve the application locally to test the change in the actual user interface, or you may be able to spin up a test environment for the branch, or you may do it in a pairing session, or you may do it in production. Wherever you do it, these are opportunities to uncover unknown unknowns.
This is where a test specialist could use their imagination, their metacognitive abilities, their experience of what they've seen before, and the skill they have hopefully been able to hone for years to efficiently discover all kinds of unexpected functional and non-functional issues and risks. Their mind should be buzzing with questions and they should be running mini-experiments to answer the ones that they think matter most in the moment, leading to more questions and more mini-experiments. This is skilled work only a human can do. It's also difficult to teach, but valuable for everyone in a team to have some ability in.
Continual process improvement
Code is not created in a vacuum. The cadence of delivery and its results might be affected by many things. How are business opportunities and user needs communicated and refined? How do teams align with each other? How do teams align internally? How do teams manage and prevent technical debt? A test specialist is perfectly placed to guide such processes to be efficient yet effective. Their broad involvement across the entire software development cycle, their drive for improvement (or, less charitably, their fixation on imperfection), and their knowledge of how to deliver software with high quality and of all the aspects that go into it all put them in a good position to make positive changes.
A lot of this might sound like something for an agile coach to do and/or for the whole team. It's certainly not necessary to have these activities done by a test specialist. It should also not be expected that one person has sole responsibility for improving ways of working, but it can certainly help to have one person bring it up and drive it when day-to-day priorities take up everyone's attention.
A test specialist can help a team form a quality culture where the end goal is not just to deliver as much and as quickly as possible, but to do it in a way whereby you can demonstrate that something of value was done, do it as a team in which everyone helped to make the end result the best it could be, and do it with as little waste as possible.
A great coach is someone who seeks to make themselves unnecessary, but is able to intervene at just the right moment with just a few words or a simple question to let a team discover a better way of doing things or bring out the best in an individual. A tester has likely already learned great diplomatic skill from having made unpleasant news go down easy many times over, has learned humility through being thought of by some as disposable or redundant, and could leverage those social skills to give a team motivation and even enthusiasm for being critical, writing good tests, clearly structuring code, collaborating well, and thinking of ways to grow justified confidence in their own work.
These are some of the ways in which a test specialist can be of value in a "modern" software development team using DevOps and Continuous Delivery practices. Is it too much for one person? Yes. They will have to do more of some things and less of others, prioritising whatever the team needs most at the time. Or you might go as far as having multiple testing roles, such as one focused on test automation, test tooling, and optimising pipelines, and another focused on whole-team testing, shifting left & right, exploratory testing, and testability and observability. Either way, there's still a lot more room for test specialists than merely translating acceptance criteria into UI-level tests.