By UI-level end-to-end tests I mean tests that cover a user flow (the path to complete a task) both horizontally (from the start to the end of the flow) and vertically (the test exercises the entire stack). Specifically, I'm going to talk about automated UI-level end-to-end tests, which drive a browser to execute the flow.
Levels
| Level | Cost | Speed to Run | Confidence Gained | Reliability |
|---|---|---|---|---|
| Unit (solitary, i.e. classes or functions) | Low | High | Low | High |
| Unit (sociable, i.e. includes some dependencies) | Low to Medium | High to Medium | Low to Medium | High to Medium |
| Integration (component, service, or contract) | Medium | Medium | Medium | Medium |
| End-to-end (UI, acceptance) | High | Low | High | Low |
The test pyramid
The table shows, roughly, why these levels of testing are traditionally represented as a pyramid. At the lowest level, a solitary unit test against a single class or function takes a trivial amount of time to run, sits so close to the code that it is relatively quick to debug, is relatively quick to write, and is unlikely to pass or fail inconsistently. The downside is that a solitary unit test like this doesn't tell you very much.
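For illustration, here is a minimal sketch of such a solitary unit test. The function under test and the Vitest-style runner are assumptions for the example, not taken from any particular codebase; the point is only how small and isolated this level of testing is.

```typescript
import { describe, it, expect } from 'vitest';

// Hypothetical pure function under test.
function formatPrice(cents: number): string {
  return `€${(cents / 100).toFixed(2)}`;
}

describe('formatPrice', () => {
  // Runs in milliseconds and pinpoints a failure to one function,
  // but says nothing about how the wider system behaves.
  it('formats cents as a euro amount', () => {
    expect(formatPrice(1999)).toBe('€19.99');
  });
});
```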
On the other extreme you have a UI-level end-to-end test scenario. If even a single step of the flow succeeds (for instance, a My Account link appears after submitting a valid username and password), you know a number of things are working together correctly. The downsides are that the test is relatively slow to run, it can pass or fail inconsistently for reasons such as fluctuating page load speed, it is costly to maintain because it depends on a relatively large part of the codebase, and a failing test is more difficult to debug because it lives several layers of abstraction away from the code.
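To make that concrete, here is a minimal sketch of such a test using Playwright (one browser-driving tool among several). The URL, field labels, and credentials are hypothetical placeholders.

```typescript
import { test, expect } from '@playwright/test';

test('logging in surfaces the My Account link', async ({ page }) => {
  // Hypothetical shop URL and form labels.
  await page.goto('https://shop.example.com/login');
  await page.getByLabel('Username').fill('test-user');
  await page.getByLabel('Password').fill('correct-password');
  await page.getByRole('button', { name: 'Log in' }).click();

  // One assertion, but it passing implies the form, the routing, the
  // backend authentication, and the session handling all worked together.
  await expect(page.getByRole('link', { name: 'My Account' })).toBeVisible();
});
```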
Justification
With all those downsides, is it even justified to have automated UI-level end-to-end tests? I would say it is, if you limit the scenarios to the highest-priority paths through the application. For the low cost of just a few scenarios, you can increase a team's confidence in their output by showing that the most important flows in the application can be completed successfully in a realistic simulation. Think of them as automated acceptance tests on top of all your other tests.
Example
For an eCommerce company I covered priority flows which involved searching for a product, navigating to a product page, adding the product to the cart, going to checkout, and filling in the checkout information. To identify these priority flows, you (or a Data Analytics or User Experience specialist near you) can analyse user behaviour in your analytics suite and come to an agreement with your stakeholders or product owner. Be careful not to overreach. For example, logging in (perhaps even via the API) to be able to start a flow is fine, but testing the various happy and unhappy paths for logging in, account creation, forgotten passwords, and so on might not be best placed in your end-to-end tests. These might be better covered at lower levels.
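As a sketch of what one of those priority-flow tests might look like (again with Playwright; every URL, selector, and the login endpoint below is a hypothetical stand-in for the real shop):

```typescript
import { test, expect } from '@playwright/test';

test('search, view product, add to cart, reach checkout', async ({ page }) => {
  // Log in via the API rather than the UI: login itself is covered at
  // lower levels, so here it is only a precondition for the flow under test.
  // page.request shares cookies with the browser context, so a session
  // cookie set by this hypothetical endpoint applies to later navigation.
  const login = await page.request.post('https://shop.example.com/api/login', {
    data: { username: 'test-user', password: 'correct-password' },
  });
  expect(login.ok()).toBeTruthy();

  // Search for a product and open its product page.
  await page.goto('https://shop.example.com');
  await page.getByPlaceholder('Search').fill('coffee grinder');
  await page.keyboard.press('Enter');
  await page.getByRole('link', { name: /coffee grinder/i }).first().click();

  // Add the product to the cart and go to checkout.
  await page.getByRole('button', { name: 'Add to cart' }).click();
  await page.getByRole('link', { name: 'Checkout' }).click();

  // Fill in the checkout information and check the order can be placed.
  await page.getByLabel('Email').fill('test-user@example.com');
  await page.getByLabel('Address').fill('1 Example Street');
  await expect(page.getByRole('button', { name: 'Place order' })).toBeEnabled();
});
```

Note how the scenario stops at the point where the order could be placed; how far to go (for example, whether to complete a real or sandboxed payment) is a trade-off you would agree with your stakeholders.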