
Wilco van Esch


    Test automation beyond end-to-end regression checks

    In discussions about test automation, I've found the term is typically understood to mean building and maintaining suites of end-to-end, UI-level regression checks using Selenium WebDriver.

    It can be that. It can also be many other things, without broadening the term to mean the use of test tools in general. In less time, and with relatively little programming skill, you can still increase the speed, coverage, and reliability of checks that would otherwise have to be done manually.

    Here are examples I've used in a professional environment where basic scripting skills brought value that outweighed the time invested:

    • Validate builds: compare checksums and filenames, check the expected pattern of folders and files, and check for empty files.
    • Test APIs: call endpoints via GET and POST requests, check the HTTP response code, and pattern-match on the response body.
    • Run a Build Verification Test: a priority subset of functional regression checks triggered on every completed build.
    • Fetch email campaigns from test accounts, parse the HTML, and compare links, tracking, and copy against consistently formatted content matrices.
    • Check pages against brand guidelines, uniquely identify sections and check them for colours, font size, font style, character limits, etc.
    • Read a list of links from a spreadsheet, visit them all, parse all links on those pages, and check the HTTP status of a GET request to each.
    • Compare spreadsheets to find items that should have been added to or removed from a dropdown, then check the dropdown's values against the result.
    • Fill in form fields with test data from a CSV file and submit the form, asserting that the entered values were sent and reporting the response.
    • For a decision tree, calculate what the product recommendations would be based on scoring conditions for each type of question answered.
    • Follow all distinct journeys through a decision tree and compare product recommendations at the end of it to the expected recommendations.
    • Verify links and tracking specifications against a list of mistakes commonly found in them.
    • Spoof the browser's geolocation repeatedly for a large number of lat/long coordinates and assert that a location-based module responds correctly when loaded from each location.

    You can also use scripting for collateral tasks:

    • Produce metrics from issue trackers or test reports.
    • Produce a live dashboard of the progress and status of current projects.
    • Allow non-technical people to generate test seed lists via a simple form.
    • Use tracking suite APIs (or web scraping) to retrieve tracking data.
    • Merge major browser versions in tracking data.
    • Identify iOS devices from their logical screen resolutions in tracking data.
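These collateral tasks tend to be even smaller scripts than the checks themselves. Merging major browser versions, for instance, is mostly string handling. A minimal sketch, assuming the tracking export yields (browser string, hit count) pairs; the `merge_major_versions` name and the input shape are assumptions for illustration:

```python
import re
from collections import Counter

def merge_major_versions(rows):
    """Collapse full browser versions (e.g. 'Chrome 118.0.5993.70') down to
    their major version ('Chrome 118') and sum the hit counts per bucket."""
    merged = Counter()
    for browser, count in rows:
        # Keep the browser name plus the first numeric version component only.
        match = re.match(r"^(.*?)\s+(\d+)", browser)
        key = f"{match.group(1)} {match.group(2)}" if match else browser
        merged[key] += count
    return dict(merged)
```

Feeding it rows like `("Chrome 118.0.5993.70", 10)` and `("Chrome 118.0.5993.88", 5)` collapses them into a single `"Chrome 118"` bucket with a combined count, which makes tracking reports far easier to read.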