Forward thinking on security
Bruce Schneier has written an excellent blog post about “backwards thinking” in software security. Using the recent California security review of voting machines as an example (every machine tested failed, yet a state official conditionally recertified them for use, provided the easily found flaws were patched), he argues that too much security thinking today amounts to:
“If the known security flaws are patched, then the product must be secure. If there are no known security flaws, then the product must be secure.”
No, no, no, no, no. Bruce insists that the people developing software have to demonstrate that their system is secure. The burden of proof should be on the developers to show that they designed and built the system properly, not on the testers to demonstrate that the system is hackable.
The government, including the military, uses such forward-thinking security development processes. So do companies that build software for commercial airplanes. We know that good development is possible. Why, oh why, do state and local governments support development efforts (like the voting machines) that rely on a backwards-thinking security model?