Know your attacker
Which is more secure, a product wherein one security flaw is found each year -- but is only fixed six months later -- or a product wherein one equally serious security flaw is found every week -- but where it only takes a day before the flaw is corrected? This question underlies most attempts to compare the security records of open- and closed-source software; Microsoft's Internet Explorer and Mozilla Firefox come to mind as a good example. More security flaws are uncovered per unit time in Firefox than in Internet Explorer, but they also tend to be fixed sooner, leaving (at least by some reports) a smaller number of "days of vulnerability". Like most good questions, the answer to this one is "it depends". In this case, it depends upon whom you're worried about defending against.

I'm going to be utterly simplistic and divide attackers into two groups: Those who find vulnerabilities in software, and those who merely exploit vulnerabilities which have already been widely published. Obviously script kiddies fall into the second group, while "real" black hats fall into the first group; hereafter I'll just refer to those two representative attackers in place of the groups to which they belong.
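Before turning to those two attackers, it's worth making the opening comparison concrete. Here's the back-of-the-envelope arithmetic -- a sketch using only the hypothetical rates above; real disclosure and patching timelines are far messier:

```python
# "Days of vulnerability" per year for the two hypothetical products above.
# Product A: one serious flaw per year, each left unfixed for six months.
# Product B: one serious flaw per week, each fixed within a day.

flaws_a, fix_days_a = 1, 182   # ~six months unpatched
flaws_b, fix_days_b = 52, 1    # one day unpatched

print("Product A:", flaws_a * fix_days_a, "days of vulnerability per year")  # 182
print("Product B:", flaws_b * fix_days_b, "days of vulnerability per year")  # 52
```

By this crude metric, the frequently-broken-but-quickly-fixed product looks better; the rest of this piece is about why that metric only matters against one of the two attackers.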
If you're worried about being attacked by script kiddies, your job is easy: Make sure you're not affected by any published security vulnerabilities. Subscribe to your vendor's security announcement mailing list and promptly install the patches they provide; subscribe to mailing lists like Bugtraq; and in the worst case, if everybody is talking about a vulnerability for which no patch or workaround is yet available, simply pull the plug. As long as patches are always promptly available, you have nothing to worry about.
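As a minimal sketch of automating the first of those steps -- assuming your vendor publishes advisories as an RSS feed, and with a placeholder URL you'd need to replace -- the script below prints any advisory it hasn't seen on a previous run:

```python
# Minimal advisory-feed watcher: poll an RSS feed of security advisories
# and report any items not seen on a previous run.
# NOTE: the feed URL is a hypothetical placeholder; substitute your
# vendor's real advisory feed.

import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://vendor.example.com/security/advisories.rss"  # placeholder
SEEN_FILE = "seen_advisories.txt"

def load_seen():
    try:
        with open(SEEN_FILE) as f:
            return set(line.strip() for line in f)
    except FileNotFoundError:
        return set()

def main():
    seen = load_seen()
    with urllib.request.urlopen(FEED_URL) as resp:
        tree = ET.parse(resp)
    for item in tree.iter("item"):  # RSS 2.0 <item> elements
        guid = item.findtext("guid") or item.findtext("link")
        title = item.findtext("title", default="(untitled)")
        if guid and guid not in seen:
            print("NEW ADVISORY:", title)
            seen.add(guid)
    with open(SEEN_FILE, "w") as f:
        f.write("\n".join(sorted(seen)))

if __name__ == "__main__":
    main()
```

Run something like this from cron and the script-kiddie defence becomes almost entirely mechanical.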
If you're worried about being attacked by black hats, life is much harder. Your security doesn't just depend on making sure that you're not vulnerable to any known issues; it also depends on making sure that you're not vulnerable to any unknown issues which a black hat might be able to find. At this point, you don't care how quickly known issues are addressed; instead, you should carefully audit both the code you are using and your choice of which code to use, to reduce the chance that an attacker will be able to find a new vulnerability. Google got this right: Last year, having decided that they wanted to use libtiff, ImageMagick, and gzip (and almost certainly other code as well), they hired Tavis Ormandy to perform a code audit. If they hadn't, odds are that there would be some compromised GoogleBots roaming the Internet by now.
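A full audit is skilled human work, but even a crude mechanical first pass can direct a reviewer's attention. The sketch below is purely illustrative (it has nothing to do with how Google or Ormandy actually work): it walks a C source tree and flags calls to string functions with a long history of buffer-overflow bugs.

```python
# Crude first-pass audit: flag calls to C functions that are classic
# sources of buffer overflows.  A real audit goes far deeper; this
# just shows where a human reviewer might start reading.

import os
import re
import sys

# Functions whose misuse has historically caused many vulnerabilities.
RISKY = re.compile(r"\b(strcpy|strcat|sprintf|gets|scanf)\s*\(")

def audit(root):
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".c", ".h")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as f:
                for lineno, line in enumerate(f, 1):
                    if RISKY.search(line):
                        print(f"{path}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    audit(sys.argv[1] if len(sys.argv) > 1 else ".")
```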
Of course, most people don't worry about black hats, for good reason: They simply don't have anything which would make them a target. A few years ago I attended a talk by the systems manager of the Oxford Supercomputing Centre, in which he made the following points:
- They have a lot of bandwidth, which makes them a target for script kiddies.
- They sound far more impressive (the supercomputing centre at Oxford University, wow!) than they actually are (their largest cluster has under 100 CPUs), which increases the chances of them being attacked by idiots.
- They don't handle any sensitive data, so the primary cost of a successful attack is in reduced system availability during the cleanup.
If, like the Oxford Supercomputing Centre, you're mainly worried about script kiddies, software with frequent but promptly fixed security issues is the most secure software for you. If, like Google, you have reason to worry about being attacked by black hats, the speed with which bugs are fixed is far less important than how many bugs there are and how easy it is for a black hat to find one. Whichever side of the fence you fall on, remember this: The software which is the most secure for you isn't necessarily the software which is the most secure for me.