The chilling effect on software security

Bruce Schneier, in a blog posting today, argues (convincingly) that it’s important for researchers and white hats to publicly release details about security vulnerabilities in hardware, software and Web sites. He writes,

Before full disclosure was the norm, researchers would discover vulnerabilities in software and send details to the software companies — who would ignore them, trusting in the security of secrecy. Some would go so far as to threaten the researchers with legal action if they disclosed the vulnerabilities…. It wasn’t until researchers published complete details of the vulnerabilities that the software companies started fixing them.

I agree with Bruce. I think that public scrutiny, which can lead to PR fiascoes, and from there to lost sales and lawsuits, is arguably the only factor driving big companies to fix their products in a timely fashion. Without the pressure of public disclosure, problems — and not just security vulnerabilities — would be addressed more slowly, or not at all.

Not everyone shares that viewpoint, of course. Big companies abhor the practice, and many believe that security flaws should be kept strictly confidential. There’s also a debate about whether software companies should be given advance notice of vulnerability discoveries, so they can issue patches before the vulnerability is publicly unveiled.

Bruce (pictured) points us to an excellent article just published on CSO Online, “The Chilling Effect,” by Scott Berinato. You should read it, and also its two sidebars by Microsoft’s Mark Miller and Tenable Network Security’s Marcus Ranum.

Now think for a bit. It’s one thing for you to argue for (or against) security vulnerability disclosure for the products you consume, say, from Microsoft or Sun or IBM or Oracle or Novell. Is it another thing for you to argue for (or against) security vulnerability disclosure for the products your development team creates? Often, there’s a double standard: disclosure is good for the other guy.

Why should that be?

Separately: If you don’t read Bruce Schneier’s blog, you should. It’s always informative, and sometimes scary.

Z Trek Copyright (c) Alan Zeichick
1 reply
  1. knechod says:

    Since I am now, after reading your column, an expert on the subject of security disclosures (grin!), I offer my humble idea.

    Have the security players band together and create a reporting list. When a defect is discovered, it is added to the list, and the issue is tracked with the time of report and time of public resolution. Based on certain industry norms, software manufacturers’ pleadings, and common sense, calculate (negotiate) a threshold for a) how long an issue stays on the list before becoming eligible for full disclosure, and b) an average time to resolution that, once reached, makes all current and future issues immediately eligible for full disclosure. Finally, the count and severity of issues should always be available.

    This would allow a software company to argue for what they consider to be a reasonable time to disclosure based on their market and practices (“Hey, we are CMM level 5, we NEED 3 weeks!”), and would provide a balance between ‘responsible’ and ‘full’ disclosure.
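    For what it’s worth, the commenter’s two-rule scheme is concrete enough to sketch in code. The following Python is purely illustrative — the class names, the day-based clock, and the specific thresholds (21 days per issue, 14 days average) are all hypothetical, not part of any existing system:

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Issue:
        vendor: str
        reported_day: int                    # day the defect was added to the list
        resolved_day: Optional[int] = None   # day a public fix shipped, if any

    class DisclosureList:
        def __init__(self, max_age_days: int, max_avg_resolution_days: float):
            # Rule (a): how long an issue may sit unresolved before full disclosure.
            self.max_age_days = max_age_days
            # Rule (b): average time-to-resolution that, once exceeded, makes
            # all current and future issues immediately eligible.
            self.max_avg_resolution_days = max_avg_resolution_days
            self.issues: list[Issue] = []

        def report(self, issue: Issue) -> None:
            self.issues.append(issue)

        def avg_resolution(self) -> Optional[float]:
            resolved = [i for i in self.issues if i.resolved_day is not None]
            if not resolved:
                return None
            return sum(i.resolved_day - i.reported_day for i in resolved) / len(resolved)

        def eligible_for_disclosure(self, today: int) -> list[Issue]:
            open_issues = [i for i in self.issues if i.resolved_day is None]
            avg = self.avg_resolution()
            if avg is not None and avg > self.max_avg_resolution_days:
                return open_issues       # rule (b): everything becomes eligible
            return [i for i in open_issues
                    if today - i.reported_day > self.max_age_days]  # rule (a)

    # A vendor that negotiated 21 days per issue and a 14-day average:
    dl = DisclosureList(max_age_days=21, max_avg_resolution_days=14)
    dl.report(Issue("Acme", reported_day=0, resolved_day=10))  # fixed in 10 days
    dl.report(Issue("Acme", reported_day=5))                   # still open
    print(len(dl.eligible_for_disclosure(today=20)))           # prints 0: within both limits
    print(len(dl.eligible_for_disclosure(today=30)))           # prints 1: 25 days old > 21
    ```

    The point of the sketch is that both of the commenter’s thresholds are simple, mechanically checkable rules — the hard part is the negotiation over the numbers, not the bookkeeping.
    
    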

Comments are closed.