April 15, 2010 1:34 PM
For a long time the security community has been talking about responsible disclosure. The Wikipedia entry describes the process as one in which stakeholders agree on a certain period of time before a vulnerability is released, but this can still leave a vulnerability unpatched for months, and with ‘responsible’ disclosure, users usually have no idea it exists, even while it is being exploited in the wild.
Now, in some cases, it makes sense. Dan Kaminsky’s DNS Cache Poisoning discovery had not been seen in the wild at all, and the community was able to use the opportunity to upgrade almost all DNS servers in one fell swoop. The vendors were very willing in this case.
I’m not advocating that a security researcher immediately post their findings publicly (a.k.a. full disclosure), though I’m not opposed to that. I think there is sometimes value in the pain some researchers express at doing the responsible thing. The vendor should absolutely be the first one notified, but in my opinion, public disclosure needs to happen faster than it tends to. If vulnerabilities aren’t posted to sites like The Open Source Vulnerability Database, a lot of work is duplicated, and worse, customers aren’t even able to properly protect themselves.
Some developers will simply not attempt to fix security issues, or will propose workarounds that aren’t acceptable to all users, as in the rant linked above.
The real reason I don’t think responsible disclosure works is that vulnerabilities already being exploited in the wild often aren’t publicized properly. With full disclosure, customers can help prioritize fixes. Customers can institute workarounds that might provide the temporary security they need. Intrusion Detection Systems can be outfitted with signatures that help prevent live compromises. A lot of things can happen that are likely to make us safer.
Then there is the other side of the coin: disclosure doesn’t make vendors any better at software development. It doesn’t make them any less prone to the same old mistakes. At the Pwn2Own 2010 competition at CanSecWest this year, the researcher who exploited Safari to root a Mac did so for the third year in a row. In minutes. Using the exact same class of exploit he’s been using all along. The same mistake just keeps happening.
This year, he chose not to reveal the specific exploit, instead running a session on how he scans for the vulnerabilities, with the hope that vendors will start doing this themselves, since it’s mostly done via automated fuzzing. While I’m not arguing for no disclosure, as was done in this case, at least Mr. Miller presented his techniques, so that Apple and others can finally get their act together on this all too common class of errors.
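To make the fuzzing point concrete: the basic idea behind “dumb” mutation fuzzing is just to randomly corrupt valid sample inputs and feed them to the target until something crashes. The sketch below is purely illustrative — the `mutate`/`fuzz` names, parameters, and loop structure are my own, not Mr. Miller’s actual tooling — but it captures why the technique is cheap enough that vendors have no excuse not to run it themselves.

```python
import random

def mutate(data, n_flips=8, seed=None):
    """Return a copy of `data` with a few randomly chosen bytes replaced.

    Deliberately format-unaware: no knowledge of the file structure,
    just random corruption, which is the essence of dumb fuzzing.
    """
    rng = random.Random(seed)
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

def fuzz(sample, parse, iterations=1000):
    """Feed mutated copies of `sample` to `parse` and collect crashers.

    `parse` stands in for the target code (e.g. an image or document
    parser); any input that raises is kept as a candidate bug trigger.
    """
    crashes = []
    for i in range(iterations):
        mutant = mutate(sample, seed=i)   # seeded so runs are reproducible
        try:
            parse(mutant)
        except Exception:
            crashes.append(mutant)        # candidate vulnerability trigger
    return crashes
```

A real campaign would run the target in a separate process and triage crashes by fault type, but even this toy loop shows how little effort the approach demands.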