The overlooked factors
(beyond automation and AIQA)
Right now, many organisations are moving in a direction where huge amounts of code can be created very quickly. One risk is that less – not more – of the code they create and have to maintain is fully understood by the majority of their engineers. Their prioritisation discussions may not be prepared for that.
Most people care about product quality; that isn’t the problem.
People usually do what they think is best for the company, but they may forget that everyone has their favourites and that ‘best’ depends on bias and point of view – it’s subjective.
When product and engineering teams get involved in planning, one person may want to set priorities from a customer perspective, while others focus on release planning, modernisation, tech debt, performance or adding requested features.
This often turns into one big discussion that never ends, and that’s the problem: too much of the time that should be spent working together is instead spent debating or arguing.
I have written a few articles about this over the years, and I believe the topic is still quite relevant.
You might think that since we have AIQA, AI-generated unit tests and AI-augmented development now, having great quality can be fully automated. Not so.
We may be faster in many ways, but it still matters what we spend our time on.
One way of looking at quality is to take the helicopter perspective and make sure the available time is always spent where it has the most positive impact. But what is that?
I’d like to share a tool called “the Quality Barometer” that has really helped improve such discussions in several companies. Some of those companies are famous for their software products.
Using this tool has helped them align product and engineering teams and replace debates with more fruitful discussions, shared understanding and agreement.
Sometimes it’s also been a great help when aligning people inside engineering teams.
After teams in those companies had used this tool, the rest was just work. Frictionless work with a lot less debating. The enjoyable kind of work.
The tool allowed them to visualise previously hidden factors and to discuss them in a language everyone can understand, instead of debating things like test coverage percentages or architectural quality.
Note: This tool is meant to enable and guide discussions so that product and engineering teams can work better together. It’s not meant to replace Sonar or any other technical code analysis tool. Facts and numbers from tools like Sonar are crucial to have and to monitor, but they don’t give us the whole picture, and they often don’t lead to better discussions.
Using Lovable, I threw together a beta testing page where anyone can try this for themselves. The page only shows a very basic version of the tool, so any organisation that wants to use it in a real situation should also consider adapting it to their own organisation (“What questions should we add?”) and to their own situation (“What should our formula look like?”).
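To make the formula question a bit more concrete, here is a minimal sketch in TypeScript of what a custom scoring formula could look like. The questions, the weights and the weighted-average approach are all made up for illustration – this is not the formula the Quality Barometer itself uses.

// Hypothetical sketch – not the actual Quality Barometer formula.
// Each question gets a weight (how much it matters to this organisation)
// and a score (the team's answer, 1 = poor, 5 = great).
interface Question {
  text: string;
  weight: number;
  score: number;
}

// A simple weighted average; replace with whatever reflects
// what "quality" means in your own context.
function qualityScore(questions: Question[]): number {
  const totalWeight = questions.reduce((sum, q) => sum + q.weight, 0);
  const weightedSum = questions.reduce((sum, q) => sum + q.weight * q.score, 0);
  return totalWeight === 0 ? 0 : weightedSum / totalWeight;
}

// Two invented example questions an organisation might add.
const example: Question[] = [
  { text: "Do we understand the code we are about to change?", weight: 3, score: 2 },
  { text: "Can we release this area without manual regression testing?", weight: 2, score: 4 },
];
console.log(qualityScore(example).toFixed(2)); // 2.80

The point is not the exact maths; it is that the weights force a team to agree on what matters most before they start arguing about individual scores.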
If anyone wants a hand with this, I’d be happy to help.
To understand the tool, start by reading the two articles mentioned on the tool page.
Here is the tool: https://www.ledartankar.se/testing
Beta version made with Lovable. Work in progress.
Don’t share any real code or code names.
No login required. No data is persisted.
Best of luck – and have some fun with it!
Take care, everyone!
Björn



