Automating regression tests isn’t always the best solution, argued Brendan Connolly at the 2018 fall Online Testing Conference. He presented the “manual regression testing manifesto” and showed how it can be used to differentiate feature testing from regression testing and to decide when to automate or not automate tests.
The manual regression testing manifesto is modeled after the values presented in the agile manifesto. It is an open declaration of purpose, a framework to discuss quality and explore how testers contribute to it.
Connolly mentioned that there seems to be a desire for a silver bullet to solve the “testing” problem, especially regression testing: buy the right tool, automate all the tests, let AI handle it. It’s not that those things don’t have value; it’s that we seem to be applying the same type of prescriptive approach to testing that people rallied against during the days of waterfall software development, argued Connolly.
The manual regression testing manifesto has five values, among them “Behavior over Bugs” and “Common over Complete”.
Connolly stated that just like the agile manifesto, it’s not that there is no value in items on the right, it’s just that we value items on the left more.
The agile revolution showcased the value of communication and collaboration, rather than more process and tools, said Connolly. The testing and QA space needs that same focus; it’s just difficult because testing is an intimate process, and it can be a struggle to find the right words, he said.
Brendan Connolly, a senior quality engineer at Procore Technologies, spoke about manual regression testing at the 2018 fall Online Testing Conference. InfoQ spoke with Connolly after his talk.
InfoQ: Why do we need a manifesto for manual regression testing?
Brendan Connolly: Communication of the skill and intent that drive our motivations as testers is the gateway to showcasing value. Something that can be a challenge is expressing how and why testing and its desired outcomes change throughout the software development lifecycle.
Regression testing is one of those areas that is frequently misunderstood, by both testers and management. The typical advice is to automate this pain away, but not all contexts have that option, or the ROI may just not be there. Just because it’s a regression test doesn’t immediately qualify it as a good option for automation. So to give clarity and enable conversations, I think it helps to have an open declaration of purpose.
InfoQ: What’s the idea behind Behavior over Bugs?
Connolly: It can be hard, especially as a newer tester, to feel like you are contributing if you aren’t finding bugs.
The time to dig for issues and bugs is during feature testing. Regression testing is about minimizing disruptions; we don’t want new features to unintentionally interrupt existing functionality. If you start regression testing looking for bugs, what you’ll end up doing is spending a good deal of time retesting functionality. In my experience, you are more likely to find minor issues unrelated to the latest changes or rediscover old bugs that teams have elected to ignore.
Yes, you found a bug, but unless it’s critical and actually related to the current changes, all you have really done is introduce a distraction. Any bug found during regression is going to get weighed against the pressure to release. This can undermine your credibility with your team, since you seem to be more focused on finding bugs than getting new features to customers.
It’s more important to ensure, prior to release, that the behavior customers have come to expect and rely on is maintained in the face of changes.
InfoQ: What is the meaning of the value Common over Complete?
Connolly: At some point in almost every tester’s career, they are going to get asked if they tested everything. In reality, there’s no point in a project when testers aren’t making tradeoffs; testers are really striving to mitigate the most risk in the time available.
Regression testing isn’t about making sure all the boundary conditions have been verified. It’s not about usability, performance or security either. These things are all important, but the time for testing them isn’t just prior to release.
When testers accept ownership of complete testing, then blame is likely to follow. As testers, we need to shift that discussion towards having a complete strategy; a strategy where the regression testing component ensures the core experience of our customers is working as designed.
InfoQ: How can we use this manifesto to improve manual regression testing?
Connolly: The manual regression testing manifesto provides a couple of things. First, it helps define a clear line differentiating feature testing from regression testing, a difference that is often a challenge for testers and management. Each core principle in the manifesto contrasts two elements that both have value in testing. By weighing their relative value, we define expectations for testing throughout the release cycle. It’s not that one is bad and the other good; it’s that there is a time and a place for each, and testers need to be able to speak to that difference.
Second, it provides a framework to start discussing quality and how testers contribute to it. It’s easy for people to typecast testers as nefarious breakers of software, when in reality we probably love the software we are testing as much as or more than the developers writing it. We don’t have the bond of the creator, yet we spend countless hours working with it, just trying to ensure its success. Teams spend a great deal of time discussing coding standards and practices, but code is much more tangible and measurable than testing and quality. There isn’t a common language for testers, so each tester has to find their own voice to put words to their motivations.
What I really hope is that this manifesto provides a spark for testers to ponder what they are currently doing and to define what quality means to them and their teams at each stage of the SDLC, and then to be able to communicate more easily what they are trying to accomplish so they can better express and surface issues to their teams.