That’s the question being asked by Bola Rotibi, one of the industry’s savviest analysts. You may know Bola from her time at Ovum, one of the U.K.’s leading analyst firms. These days, Bola is at MWD Advisors, a British firm formed in 2005.
In her most recent blog post, “The dilemma of ‘good enough’ in software quality,” Bola writes,
One of my conclusions was that the case for “good enough” is no longer good enough in delivering software. But some might argue that this viewpoint is promoting the notion of static analysis as a means of perfecting something – “gilding the lily” – rather than an essential tool for knowing whether something is “good enough”.
The attitude of ‘good enough’ has been hijacked as an excuse for “sloppy” attitudes and processes and a “let’s suck it and see” mentality. Surely such an attitude cannot continue to exist in delivering software that is increasingly underpinning business critical applications?
She’s completely right, but one challenge she doesn’t address is that it’s difficult to communicate, through requirements documents or software models, exactly how good software should be. “Zero defects” doesn’t mean anything. So, how do you unambiguously quantify how much quality you’re willing to pay for (and wait for)?
Imagine that you’re writing embedded software. The quality you need for an anti-lock brake system in your car is different than the quality you’ll need in that car’s satellite entertainment system. The quality you’ll need in a digital camera’s auto-focus algorithm differs from, say, the quality needed in that camera’s photo-retouching algorithms.
In enterprise software, the quality you need in some AJAX code for a bank’s secure online transaction system is different than the quality required for the bank’s “welcome” video message from the chairman on the home page.
How do you express this?
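One partial answer is to replace slogans like “zero defects” with measurable, per-component quality budgets. Here’s a minimal sketch of the idea in Python; every component name, metric, and threshold is invented for illustration, not drawn from any real requirements document.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QualityBudget:
    """A hypothetical, measurable statement of 'how good is good enough'
    for one component. All thresholds below are illustrative."""
    component: str
    max_defects_per_kloc: float   # ceiling on residual defect density
    min_branch_coverage: float    # fraction of branches tests must exercise
    max_p99_latency_ms: float     # 99th-percentile response-time target

# Different components get very different budgets, on purpose.
BUDGETS = [
    # Safety-critical: an extremely tight budget.
    QualityBudget("anti_lock_brakes", 0.1, 0.99, 5.0),
    # Entertainment system: a far looser budget is acceptable.
    QualityBudget("satellite_entertainment", 5.0, 0.70, 200.0),
]

def meets_budget(budget, defects_per_kloc, branch_coverage, p99_latency_ms):
    """Return True only if every measured value satisfies the stated budget."""
    return (defects_per_kloc <= budget.max_defects_per_kloc
            and branch_coverage >= budget.min_branch_coverage
            and p99_latency_ms <= budget.max_p99_latency_ms)
```

The point of the sketch is the shape, not the numbers: once each component carries its own explicit thresholds, “good enough” becomes something a team can measure and argue about, rather than an attitude.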