Don’t Test The Testers

In the last 18 years of delivery I’ve lost count of the number of times I’ve heard “No, we don’t need to test that, it’s just a minor bug”, followed closely by the familiar wail of operational fire engines pulling up to a disaster.

It’s all too easy to panic and cut corners when a client is on the warpath, demanding to get something out quickly.

What’s usually not evident is the pressure the client is under from the top of their food chain. Be it a chief executive who has neither the time nor the humility to treat their staff with respect, or a manager who’s stuck their neck out over the little piece of functionality you’re not bothering to test.

What you need to consider is this: if the client is subjective, emotional or rude now, what are they going to be like when they’ve told the world about their cool widget and it brings the server down?

The general development cycle for most agencies is:

Discuss -> Define -> Develop -> Implement locally -> Push to server(s)

No matter how well any of the teams have performed in the previous steps, once the end result gets “eyes-on” from the client (or worse, the end user) and there’s an issue, somewhere a fire sparks into life.

Thankfully, over the years I’ve seen all departments recognise the importance of testing, how it affects the end user and how badly untested work reflects directly on them.

First off: get it out there at the point of sale. Tell the client. Tell everyone. Bugs are a part of life. Every industry has them; it’s just that, over the years, those (slower-moving) industries have had the chance to create robust QA procedures which make the end product’s quality hard to beat. Audi, for instance, has been destroying its cars for 75 years to make sure the end user gets a high-quality product which lasts (we call it stress testing, but the effect is much the same).

Including testing at the point of sale shows two things:

A. An honest statement of risk, based on fact (which every agency has); and
B. The proposed route you’re going to travel to mitigate the risk becoming an issue.

If you’re looking at two companies, one of which shows you a real-world problem and how they’re going to overcome it, and another which refuses to acknowledge the risk in front of them, how is the latter going to react to a situation they’re not even aware of?

Who Does the Testing

Many companies will tell you they test. What they’re not always open about is who is doing the testing.
In his article “What is work ethic?”, Andrei Draganescu states: “…you should not defer testing responsibility to managers, QA or less senior peers.”

Ensure any testing function is not handled by the people creating the bugs in the first place. The same principle applied at school, where you were not allowed to mark your own exam; it’s too easy to dismiss, or worse, ignore an issue you’ve created.

This isn’t to say developers are bad at their jobs; there’s more to testing than meets the eye.

Many people fall into the Dunning–Kruger trap where, with access to so much inside information, they think they can thoroughly test an application. Contrary to popular belief, it’s…

Not Just “Clicking About”
Commonly dismissed as “clicking about”, regression testing usually does involve a lot of clicks, as the testers travel a user journey and ensure the functionality and form are as they should be.

One of the additional steps a tester may take is negative testing. Here they try to use the site to achieve a common goal, but go about it in an alternative way.

This is harder than it looks and easy to miss when you’ve worked in development for many years. But the important take-home message is that you are testing the site based on the engagement of a key demographic. An older user may approach a goal in a different way to a younger one; capturing that difference is critical to testing.

It’s then not just about knowing there’s a problem, but also what caused the issue. A good example is manipulating the URL – the first task is to surface the problem by changing the URL structure; the next is to understand why that has happened and consider the implications of where else it may occur.
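As an illustration of that kind of negative test, the sketch below requests an order page the current user should never be able to see by tampering with the ID in the URL, then checks the site fails gracefully. The base URL, path and acceptable status codes are assumptions for the example, not a real site:

    import requests

    # Hypothetical staging URL and order path, purely for illustration.
    BASE = "https://staging.example.com"

    def test_tampered_order_url_fails_gracefully():
        # Request an order ID that does not belong to the current user.
        response = requests.get(f"{BASE}/orders/999999", allow_redirects=False)
        # A well-behaved site returns a controlled 403/404 here; a 5xx server
        # error (or another user's data) is a bug to write up for developers.
        assert response.status_code in (403, 404), (
            f"Expected a graceful 403/404, got {response.status_code}"
        )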

Having a website “work” for a web developer is excellent news, but the better headline is that it works for the target audience of the product.

In the Beginning…
Testing needs to be at the very heart of any project, and this means ensuring the test team are there at the beginning.

Day one. Kick-off. Once you’ve defined what project success looks like, that definition is what you test against at a granular level, using test cases.

Test Cases
While developers may use unit tests to cycle through the functionality of their code (not covered in this article, but worth looking up), most testing will fall under “test cases”.

These are based on functional elements for the end user. The beauty of test cases is their shareability with the client and the relative simplicity of the language used. That’s not to say there is no skill in putting these together; test cases need to cover a wide range of scenarios and define the expected outcome accurately – easy to spot after the fact, but difficult to identify on a virgin project.
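Test cases usually live in a spreadsheet or test-management tool, but to make the shape concrete, here is one expressed as plain data. The fields, ID and wording are our own illustration, not a formal standard:

    # One illustrative functional test case, written so a client could read it.
    # Field names and content are example assumptions, not a formal standard.
    test_case = {
        "id": "TC-017",
        "title": "Registered user can reset a forgotten password",
        "preconditions": "User account exists; user is logged out",
        "steps": [
            "Go to the login page and click 'Forgotten password'",
            "Enter the account email address and submit",
            "Follow the reset link in the email and choose a new password",
            "Log in with the new password",
        ],
        "expected_result": "User is logged in and lands on their account page",
    }

Note how each step names an observable action and the expected result is unambiguous – that precision is where the skill lies.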

Automated Testing
One of the advantages of digital is the ability to get one program to test the worthiness of another. Seems logical, but it does have some drawbacks.

Often a situation arises where a client signs off functionality on staging, but this appears broken on production. The primary cause of this is usually a mismatch between production and staging (these should be kept as similar as possible), which in turn causes a new element to end up in a position it wasn’t in before.

Another scenario is that the item on production was never working in the first place, but it was only spotted during additional regression testing by the client or team after deployment.

In reality, what can happen is a combination of all scenarios in varying proportions: a cocktail of some code working, some functionality affected by deployment, and a little that was already not rendering but was only unearthed during further testing.

Automated testing allows the team to ensure functionality on live works as it did on staging, across as many functional scenarios as you care to write test scripts for. These can be executed as and when needed, without the overhead of a massive manual testing resource.
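As a sketch of what such a script might look like, the snippet below drives the same journey on staging and production with Selenium and compares the outcome. The URLs, selectors and search journey are placeholder assumptions, not a real configuration:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Placeholder environments; in practice these come from configuration.
    ENVIRONMENTS = {
        "staging": "https://staging.example.com",
        "production": "https://www.example.com",
    }

    def run_search_journey(base_url: str) -> str:
        """Drive one scripted user journey and return a comparable result."""
        driver = webdriver.Chrome()
        try:
            driver.get(base_url)
            driver.find_element(By.NAME, "q").send_keys("blue widgets")
            driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
            return driver.find_element(By.CSS_SELECTOR, "h1").text
        finally:
            driver.quit()

    results = {name: run_search_journey(url) for name, url in ENVIRONMENTS.items()}
    # If staging and production disagree, a tester investigates which scenario
    # applies: deployment mismatch, pre-existing bug, or a bit of both.
    assert results["staging"] == results["production"], results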

This allows a core number of professional testers to do what they’re good at – analysing automated test results, using that feedback to create more robust test cases based on any false positives, and writing bug reports for developers to work through.

Automated testing does have downsides, and these need to be managed. The rendering of elements is not always easy to test (e.g. a form may work, but its fields might not be aligned).
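Rendering can sometimes be partially covered by comparing element positions, though this only scratches the surface of what a human tester sees. A rough sketch, with a hypothetical page URL and element IDs:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # The form may "work", but are its fields aligned? Page URL and element
    # IDs are hypothetical; the pixel tolerance depends on the design.
    driver = webdriver.Chrome()
    try:
        driver.get("https://staging.example.com/contact")
        name_field = driver.find_element(By.ID, "name")
        email_field = driver.find_element(By.ID, "email")
        # Both fields should share the same left edge, within a couple of pixels.
        assert abs(name_field.location["x"] - email_field.location["x"]) <= 2
    finally:
        driver.quit()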

There is an overhead in setting up automated testing (though over a large number of releases, this pays for itself). As the product evolves, the automated tests will need to be reviewed and modified to fill any gaps in the initial setup.

Conclusion
Testing is essential for any development project. Technical debt is pervasive and can seriously affect not only hearts and minds, but also the client’s (and agency’s) bottom line.

It’s worth bearing in mind that users are customers, and bugs affect their journey through your website. This journey often ends in a purchase or further engagement – disruption can hurt sales figures.

If your agency looks at you blankly when you inquire about testing, actively think about walking away. I know for a fact that if Audi hadn’t completed crash tests, I’d be looking at another brand of vehicle. As a client, the last thing you want is your website being a car crash of bugs and poorly executed code.

The business case for testing is obvious. While it may cost a little more to contract with a company which does test, the initial overhead means your staff will waste less time doing their own testing, and the releases should be cleaner.

The article was written by Ben Maffin & Rebecca Boardman.

Ben Maffin – After delivering the impossible, Ben moved into a Product Manager role in 2017, where he continues to support delivery teams through process and sheer force of willpower.

Rebecca Boardman – Having tested the impossible, Bec heads up the Reading Room test team, helping internal and external stakeholders root out, report and then fix technical debt.

References:

Dunning–Kruger effect – https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect
Audi safety and quality assurance – https://www.audi-mediacenter.com/en/safety-and-quality-assurance-265
Andrei Draganescu, “What is work ethic?” – https://hackernoon.com/what-is-work-ethic-40cd0c637976