Vulnerability Disclosure, Cryptography Research, and Open Source

Today, Bruce Schneier posted an essay to his blog arguing the case for full disclosure of software vulnerabilities, a position I also favor. It’s apparently a sidebar to an article in CSOOnline entitled “The Chilling Effect,” which covers some of the growing issues surrounding vulnerability research in web software. There are also two other sidebars: one arguing the case for keeping vulnerability information secret or disclosing it only to the software vendor, and one for the hybrid option that has sprung up in the last few years, termed “responsible disclosure.”

I’ve always subscribed to the full-disclosure camp for many of the reasons Schneier points out. I, too, believe that often the only way to get some software vendors to patch their bugs is to force them to by making the information publicly known, thereby increasing the threat to the vendor’s customers. While the risk to those customers goes up, the disclosure also applies pressure to the vendor’s bottom line: the vendor risks losing customers through bad press and the general knowledge that it produces a shoddy product. When something affects a company’s bottom line, the company tends to actually do something about it rather than ignore or spin the problem.

I’ve always been very interested in cryptography. While I don’t have the math background to really pursue a substantial career path, or even hobbyist status, within the field, I am very much a user of cryptography every day and try to stay abreast of the developments, political issues, and so forth surrounding it. One of the things you learn early on when studying cryptography is that, for anyone hoping to do serious research creating new cryptographic systems or improving existing ones, secrecy is a big red warning flag. So much so that if part of your algorithm is “proprietary” and kept secret, it won’t even be seriously considered by the cryptography research community and your peers. The only way a cryptosystem can ever hope to be proven strong and secure, and thus gain popularity and market penetration, is through extensive peer review and testing; that cannot happen if parts of it are secret. The same is very much true of non-crypto software, and it is why many open-source projects are regarded as far more secure than closed-source, proprietary software from big vendors like Oracle and Microsoft. Because the software can be peer reviewed in its entirety, vulnerabilities are found through massive amounts of review and usually fixed.
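
This requirement of openness is commonly known as Kerckhoffs’s principle: a cryptosystem should remain secure even when everything about it except the key is public. As a minimal sketch of what that looks like in practice, the Python snippet below encrypts and decrypts a message with AES-GCM, a publicly specified and extensively peer-reviewed design; it assumes the third-party `cryptography` package is installed, and the only secret in play is the key, never the algorithm.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Kerckhoffs's principle in miniature: the algorithm (AES-GCM) is public
# and heavily peer reviewed. The key is the ONLY secret.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

# The nonce must be unique per message but does not need to be secret.
nonce = os.urandom(12)
plaintext = b"attack at dawn"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Anyone holding the key -- and only the key -- can recover the message.
recovered = aesgcm.decrypt(nonce, ciphertext, None)
assert recovered == plaintext
```

The design point is that nothing here depends on the attacker’s ignorance of the mechanism: publish the code, the nonce, and the ciphertext, and the system stands or falls on the key alone, which is exactly what allows open peer review to do its work.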

It would seem that across varying technology fields, openness and full disclosure have proven again and again to produce more secure results, whether in vulnerability research, cryptography, software development, or any number of other disciplines. It makes one wonder how the argument for secrecy about how a system works can continue to be made in the general case.

To hear me make the case for some secrecy, or rather obscurity, you can find commentary similar to the above regarding cryptosystems in my response to a post by Martin Davies over at the Voice of VoIPSA Blog entitled Security Through Obscurity.