Following my last post, I wanted to look at each of Forrester’s recommendations and add a little color commentary where I thought they either don’t go far enough or have gaps you shouldn’t overlook.
- Improvement No. 1: Define Quality To Match Your Needs. When you read the headline, it sounds great. But unfortunately, it’s framed in the context that it’s not possible to deliver perfect quality, so try to focus on delivering good-enough quality. I don’t disagree with the premise – Zero Defects is a myth. Even five-nines quality delivers some truly horrific stats when applied to things like plane crashes and survival rates of open-heart surgery. Software is rarely a life-or-death situation, but when you define “good enough,” it’s important that the metrics you define are tied back to impacts on the business. How do the technical/operational metrics affect application performance, revenues, customer attrition, and support costs? If you frame “good enough” within this context…and have measurable metrics…and the ability to measure accurately enough (think about building something that requires tolerances of millimeters when all you have is a yardstick)…you’re in a good place. But in my mind we see too many products rushed to market (Google is a prime culprit) that just aren’t ready. There’s a mindset that it’s OK to ship beta-level products to your customers as a general release, and I don’t believe that’s healthy.
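To make the five-nines point concrete, here’s a back-of-the-envelope sketch. The operation counts are hypothetical and purely illustrative; the only real inputs are the 99.999% reliability level and the number of minutes in a year.

```python
# Back-of-the-envelope: what "five nines" (99.999%) actually allows.
# The 10-million-operations figure below is hypothetical, for illustration.

def allowed_failures(reliability: float, operations: int) -> float:
    """Expected number of failures at a given reliability level."""
    return operations * (1.0 - reliability)

five_nines = 0.99999

# Downtime budget for a service at five-nines availability:
minutes_per_year = 365 * 24 * 60
downtime_minutes = minutes_per_year * (1.0 - five_nines)
print(f"Five-nines downtime budget: {downtime_minutes:.2f} minutes/year")

# The same rate applied to a hypothetical 10 million flights or surgeries:
print(f"Expected failures in 10M operations: "
      f"{allowed_failures(five_nines, 10_000_000):.0f}")
```

Roughly five minutes of downtime a year sounds superb for a web service; one hundred downed planes out of ten million flights does not. Which is exactly why “good enough” has to be defined per domain.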
- Improvement No. 2: Broadcast Simple Quality Metrics. I believe this is directionally right, but only half the story. On one hand, this speaks to the maturity of your engineering processes. If you have no metrics at all, you likely don’t have much of a process and almost no chance to improve it, because you can’t tell whether any changes you make help or hinder the effectiveness of your software engineering team. This is an area where Ness SPL’s strategic consulting practice has helped clients install proper metrics programs. But the other place where simple metrics fall short is that they imply you’re looking at your own team’s performance in a vacuum. You should also measure yourself against peers so that you can identify where you have the greatest opportunity for improvement, and therefore where to focus your SDLC process re-engineering efforts.
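As a minimal sketch of what “simple, broadcastable” metrics might look like, here are two common industry conventions (defect density and defect escape rate). These definitions are my own illustration, not anything prescribed by Forrester; the numbers plugged in are invented.

```python
# Two simple quality metrics a team could broadcast on a dashboard.
# Metric definitions are common industry conventions; input numbers are
# invented for illustration.

def defect_density(defects_found: int, kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / kloc

def defect_escape_rate(found_in_production: int, found_total: int) -> float:
    """Share of all known defects that escaped to customers."""
    return found_in_production / found_total

print(f"Density: {defect_density(42, 120.0):.2f} defects/KLOC")
print(f"Escape rate: {defect_escape_rate(6, 48):.1%}")
```

The point isn’t the specific formulas; it’s that once the same two or three numbers are published every release, you can see whether process changes move them, and compare them against peer benchmarks.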
- Improvement No. 3: Fine-Tune Team And Individual Goals To Include Quality. Absolutely agree. If you don’t measure your people on the quality of the software they deliver, it won’t happen. And importantly, these measures aren’t just for the testing team, but for the entire R&D organization.
- Improvement No. 4: Get The Requirements Right. I’d say this is a bit of a “Duh” statement if it weren’t for the number of times that companies don’t get the requirements right. In the Capers Jones study “Software Quality In 2010: A Survey Of The State Of The Art,” defects injected at the requirements stage are the number-one source of delivered defects, the hardest to prevent, and the most costly to repair.
- Improvement No. 5: Test Smarter To Test Less. There are a number of methodologies for optimizing the effectiveness of your testing regime based on risk. I want to amplify one of Forrester’s suggestions because I didn’t think it came through clearly enough, and I feel it is a very important point: focus your testing on sections with a high rate of code change. If the code didn’t change, it’s highly unlikely the test results will.
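One simple way to find those high-churn sections is to mine version-control history. Here’s a sketch that ranks files by how often they’ve changed recently; it assumes a git repository, and the 90-day window is an arbitrary choice of mine, not something from Forrester’s report.

```python
# Sketch: rank files by recent change frequency ("churn") so testing effort
# can be focused where the code actually changed. Assumes a git repository;
# the 90-day window is an arbitrary illustrative choice.
import subprocess
from collections import Counter

def churn_from_log(log_output: str) -> list:
    """Count how often each file path appears in `git log --name-only` output."""
    counts = Counter(line.strip()
                     for line in log_output.splitlines() if line.strip())
    return counts.most_common()

def churn_ranking(since: str = "90 days ago") -> list:
    """Ask git for per-commit file lists, then rank files by churn."""
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    return churn_from_log(out)

# Demo on canned log output (so this runs even outside a repository):
sample = "src/app.py\nsrc/db.py\n\nsrc/app.py\n"
print(churn_from_log(sample))  # [('src/app.py', 2), ('src/db.py', 1)]
```

Feed the real ranking into your test-selection process and you have a cheap, data-driven version of “test where the change is.”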
- Improvement No. 6: Design Applications To Lessen Bug Risk. Dr. Deming’s third principle, often invoked in Six Sigma circles, is “Cease dependence on inspection to achieve quality.” The objective is to design quality into the product from the outset. Looking again at the Capers Jones study, defects injected at the design phase are the most severe and pervasive. Forrester states: “Architectural complexity, spaghetti coding techniques, and poor design all increase the likelihood that your application will contain bugs. Mitigate that likelihood with better design principles such as separation of concerns, frameworks, and design patterns to reduce design complexity and the likelihood of bugs in your code.” ‘Nuff said.
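For readers who want “separation of concerns” made concrete, here’s a toy sketch: validation, business logic, and persistence live in separate pieces, so each can be tested (and go wrong) independently. All names here are invented for the example, not taken from Forrester’s report.

```python
# Toy illustration of separation of concerns: each concern is isolated,
# so a bug in one is less likely to hide inside another. All names are
# invented for this example.

def validate_order(order: dict) -> None:
    """Input-validation concern: reject malformed orders early."""
    if order.get("quantity", 0) <= 0:
        raise ValueError("quantity must be positive")

def price_order(order: dict, unit_price: float) -> float:
    """Business-logic concern: a pure function, trivially unit-testable."""
    return order["quantity"] * unit_price

class OrderStore:
    """Persistence concern: in-memory here, swappable behind one interface."""
    def __init__(self) -> None:
        self._orders = []

    def save(self, order: dict) -> None:
        self._orders.append(order)

order = {"quantity": 3}
validate_order(order)
total = price_order(order, 9.99)
OrderStore().save(order)
```

Because pricing never touches storage or validation, a change to any one piece is testable in isolation, which is exactly the kind of design-time bug mitigation Forrester is describing.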
- Improvement No. 7: Optimize The Use Of Testing Tools. Another area of full-throated support, except for one thing: tools alone don’t do very much. You need the expertise to know how to get value from the tool and create a sustainable benefit. I am a big fan of test automation, but as I’ve written before, the line you draw between desiring a test automation program and an actual, well-run, sustainable test automation program is not always straight.
Are you following any of Forrester’s “7 Steps”? Please share your thoughts and reactions in the comments.