Bola Rotibi on software quality, or lack thereof

Good enough is good enough. But how good is good enough?

That’s the question being asked by Bola Rotibi, one of the industry’s savviest analysts. You may know Bola from her time at Ovum, one of the U.K.’s leading analyst firms. These days, Bola is at MWD Advisors, a British firm formed in 2005.

In her most recent blog post, “The dilemma of ‘good enough’ in software quality,” Bola writes,

One of my conclusions was that the case for “good enough” is no longer good enough in delivering software. But some might argue that this viewpoint is promoting the notion of static analysis as a means of perfecting something – “gilding the lily” – rather than an essential tool for knowing whether something is “good enough”.

She adds,

The attitude of ‘good enough’ has been hijacked as an excuse for “sloppy” attitudes and processes and a “let’s suck it and see” mentality. Surely such an attitude cannot continue to exist in delivering software that is increasingly underpinning business critical applications?

She’s completely right, but one of the challenges she doesn’t address is that it’s difficult to communicate, through requirements documents or software models, exactly how good software should be. “Zero defects” doesn’t mean anything. So, how do you unambiguously quantify how much quality you’re willing to pay for (and wait for)?

Imagine that you’re writing embedded software. The quality that you need for an anti-lock brake system for your car is different from the quality you’ll need in that car’s satellite entertainment system. The quality of software you’ll need in a digital camera’s auto-focus algorithm is different from, say, the quality you’ll need in that camera’s photo-retouching algorithms.

In enterprise software, the quality that you need in some AJAX code for a bank’s secure online transaction system is different from the quality required for the bank’s “welcome” video message from the chairman on the home page.

How do you express this?

1 reply
  1. Ian says:

    What an interesting clash of worlds. Static analysis was originally used to test systems that were too complex to test using the tools available.

    Today, it still serves the same purpose but can also be extended to a general software environment.

    Rather than being used as a replacement for existing testing approaches, static analysis can at least provide some form of risk analysis, which, to take your example, would be critical when designing anti-lock brakes.

    As far as the question of how to establish what is good enough, I agree with both of you: it’s not easy. Any expectation of extracting this from the initial requirements analysis suggests a reality gap.

    However, there is a process that will establish what good enough means and it can be applied to all the examples you gave.

    Business units are increasingly asking for Service Level Agreements in order to ensure that they get the level of service they want from the IT department. Why? Almost every journalist and analyst over the last few years has told them to do it.

    So if we want to know what is good enough, look at the SLA. If it says that the software must be available for a given period of time or respond within a set number of milliseconds, then that is what development and operations must achieve.

    If the testing team cannot prove the response times, the software is not good enough. In a complex system such as banking, where there are too many system variables to exhaustively test performance, static analysis provides a mechanism whereby models can be used to identify the most likely problems.
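    To make the SLA idea concrete, here is a minimal sketch in Python of how a response-time and availability target could be turned into a pass/fail check. The threshold values, the health-check URL and the sampling approach are illustrative assumptions, not anything specified in the post or in any real agreement.

    import statistics
    import time
    import urllib.request

    # Illustrative SLA thresholds -- hypothetical numbers, not taken from a real agreement.
    SLA_MAX_MEDIAN_MS = 200       # median response time must be 200 ms or less
    SLA_MIN_AVAILABILITY = 0.999  # at least 99.9% of requests must succeed

    def measure(url, samples=50):
        """Issue `samples` requests and return (median latency in ms, success rate)."""
        latencies, successes = [], 0
        for _ in range(samples):
            start = time.perf_counter()
            try:
                with urllib.request.urlopen(url, timeout=5):
                    successes += 1
            except Exception:
                pass  # a failed request still counts against availability
            latencies.append((time.perf_counter() - start) * 1000)
        return statistics.median(latencies), successes / samples

    if __name__ == "__main__":
        # example.com is a stand-in; point this at the service under test
        median_ms, availability = measure("https://example.com/api/health")
        if median_ms <= SLA_MAX_MEDIAN_MS and availability >= SLA_MIN_AVAILABILITY:
            print("good enough: SLA thresholds met")
        else:
            print("not good enough: median %.0f ms, availability %.1f%%"
                  % (median_ms, availability * 100))

    A check like this gives testers an unambiguous answer to “is it good enough?”, even though it says nothing about the internal quality of the code that meets it.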

    Will this solve sloppy coding overnight or eradicate it completely? Not in my lifetime, and I intend that to be a long and happy one.
