[Written ca. 2006]
Introduction
There has been a great deal of debate over the last few years about the relative merits of proprietary code development versus open source development. One of the claims made for each development model is that it (whichever "it" you happen to be arguing for) produces more secure software.
My position in this debate is to look at a more complete definition of security, and to look at measurable data. Security is much more than the number of patches announced, or the length of time it takes to produce them. Security is definitely much more than whether a system has lots of buffer overflows -- or none. A system with no software flaws and no patches is still not secure if it doesn't adequately control access, protect integrity, provide availability, and otherwise meet security policy goals. From experience and analysis, we can see that the manner in which software is developed has little to do with whether these properties are designed in properly. Security cannot be assured without software quality, certainly, but the quality of coding is but one factor in whether a system is secure.
Problems with code quality are why so many people claim that open source is more secure than proprietary products. There has been an unending stream of patches for security flaws in software produced by vendors. Many of those flaws are the result of careless coding and uninformed design. Some vendors have also been remarkably slow to respond to vulnerability reports with fixes for their products. Proponents of open source claim that these problems are (or can be) mitigated by open development and examination: various factors of the marketplace that may complicate the response of proprietary code vendors are not present in the open/free software environment.
Unfortunately, many of those proponents are unfamiliar with a wide range of products (including historical systems): their arguments tend to be based on examples of code produced by only a few major vendors. The most highly trusted systems ever produced, such as S/COMP and GEMSOS, were developed as proprietary products; some of the code may actually have been classified! Additionally, many open source artifacts intended to be part of the security core of networked systems have been produced, released, examined, and used for months or years before basic flaws were found in them. These have included OpenSSL, OpenSSH, and the Linux kernel. Thus, there is certainly evidence that proprietary development may be better than depending on open source.
The thrust of my position is that security is an absolute property that must be designed in from the beginning, coded with care, and enforced throughout the software development lifecycle. This embodies a set of issues -- training, design, and the use of appropriate tools -- that have nothing to do with whether the source code is open. Thus, whether code is produced in an open or proprietary manner is largely orthogonal to whether the code (and the encompassing system) should be highly trusted.
I first presented these views in March 2000 at the Danish Open Systems Conference in Copenhagen, Denmark. The title of my presentation was Why Open Source Software Only Seems More Secure. The audience was prepared to be hostile, I believe, but by the time I finished they largely seemed to agree with most of my points. The PowerPoint slides I presented at that meeting are available here. I later presented roughly the same material at the Extreme Linux Conference in Santa Fe, NM, in February of 2001.
Other Views
In 2004, one of my students, David L. Wilson, provided another look at the proprietary vs. open source issue as his Master's thesis project. In addition to many of the points I have already made, Dave looked at issues of communication and perception, reaching the conclusion that the perception of security and vulnerability was actually one of the major but overlooked issues in this debate. He and I co-authored an article on this topic that appeared in the September 24, 2004 Chronicle of Higher Education.
Here are some other interesting, related references I have run across. This is not intended to be a comprehensive list, but rather one of items with some interesting data or rigor of analysis to back them up.
- Some analysis of why simply finding and reporting flaws may not actually be a good use of our time and resources.
- Analysis shows that the "many eyeballs" hypothesis associated with open source is, at best, only true in some limited cases.
Screen Savers Essay
The following is a short essay I wrote in conjunction with my November 6, 2002 appearance on TechTV's The Screen Savers show.
Is Open Source More Secure?
We often hear debate about which is more secure: open source or proprietary source. Each side makes arguments and refutes the arguments of the other. In truth, neither is correct (or both are). Whether or not the source is proprietary does not determine whether the software is more secure.
Instead, other factors are more important in determining the overall quality and trustworthiness of a system:
- Completeness and consistency of design
- Training and dedication of developers
- Type and quality of tools used to develop the system
- Extent and fidelity of testing
- Complexity of the user interface
From this standpoint, few current offerings, whether open or proprietary, are really trustworthy, and this includes both Windows and Linux, the two systems that consistently have the most security vulnerabilities and release the most security-critical patches.
Know thy user
Security is highly dependent on context and requirements. A system that is adequate for use by a trained technologist in a closed development environment has very different requirements from one deployed for use in business WWW server applications, and neither is likely to have requirements similar to those of a high-security military communications environment.
Unfortunately, too many people base their decisions on acquisition cost, on compatibility of word-processing software, or on a simple comparison of only two or three systems. The result, not surprisingly, is a significantly vulnerable computing base.
Know thy history
Many of the people who are most vocal in this debate have never formally studied security and have no experience with any of the security-certified systems. Thus, the comparisons made to justify their arguments are really too limited.
For instance, some of the most secure systems ever developed (e.g., S/COMP, GEMSOS, Trusted Solaris) were proprietary. However, those same systems met the old Orange Book criteria for B3 or A1 trust certification. Those systems were developed in a proprietary environment by organizations that employed consistent specification methods, strong development tools, and highly trained personnel. They were also able to develop specific design criteria with security as a core principle.
However, this is not a characteristic that is always associated with proprietary source. No open-source systems have been developed the same way, and few of today's proprietary systems are developed with such care.
Some open-source projects are clearly more trustworthy than their proprietary counterparts. As an example, compare the Apache Web server (open source) with the Microsoft IIS server (proprietary code). The IIS code consistently has five to ten times as many serious security flaws reported each year as the Apache server.
Yet compare the vulnerabilities reported in Linux this year against those for Solaris or AIX (both proprietary source), and you find that Linux has three to five times as many vulnerabilities. If you look at several years of such reports and do the comparisons, it becomes clear there is no argument to be made generally for open vs. proprietary, although it is certainly possible to say that some systems are less likely to have flaws that render them vulnerable to attacks.
Debunking the open-source arguments
Open-source advocates often claim that the true security benefits of open source are that the code is open to inspection by many eyes and that flaws, when found, can be patched quickly. Let's look at each of these arguments separately.
That many eyes can review the code does not necessarily make the software more secure. For one thing, if the people examining the code don't know what to look for, or are depending only on manual inspection (without testing), then there is no guarantee they will find security-related flaws.
Faults that depend on subtle interactions with other software or hardware are easy to miss unless the interacting components are examined together. Few people have the training to do security audits that include nuances such as these.
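As a concrete illustration, here is a minimal, hypothetical C fragment (the function and its names are invented for this example, not drawn from any real package) showing the kind of flaw that reads as correct under manual inspection:

```c
#include <stddef.h>   /* size_t */
#include <string.h>   /* memcpy */

#define BUFSIZE 64

/* Copy a caller-supplied record into a fixed-size local buffer. */
int copy_record(const char *src, size_t len)
{
    char buf[BUFSIZE];

    /* This bounds check looks safe, but len is unsigned: if len is
     * SIZE_MAX, then len + 1 wraps around to 0, the test passes, and
     * the memcpy below writes far past the 64-byte buffer. */
    if (len + 1 > BUFSIZE)
        return -1;            /* reject oversized input -- or so it seems */

    memcpy(buf, src, len);    /* overflows the stack when len has wrapped */
    buf[len] = '\0';

    /* ... parse and use buf ... */
    return 0;
}
```

Read line by line, the check appears correct; only reasoning about unsigned integer wraparound, or testing with extreme inputs, exposes the overflow. That is exactly the sort of nuance a reviewer without security training, however many eyeballs are present, is likely to miss.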
This is why we have seen reports of security flaws in software that has been in the open for years, such as Kerberos and OpenSSL. What is more surprising is that these are packages of security software in widespread use! Clearly, more is needed than simply making the software available for inspection.
The second argument is based on an abuse of the word "secure." A system that needs frequent patching is not secure, even if it is simple and quick to apply those patches. By analogy, if you owned a car that frequently blew up or ran into walls because the brakes or steering failed, the fact that you could replace the bumpers or tires yourself would not make the car "safer." A secure system is one that works correctly under stress and under attack -- it does not need frequent patching.
That one can more easily patch Linux than Windows simply means that it may be more maintainable, or easier to administer. By itself, maintainability is not security.
A perfect system?
I personally use Mac OS X and Solaris. In our center, we use those systems, as well as FreeBSD, OpenBSD, every version of Windows, Mac OS 9, and several versions of Linux (including Debian, RedHat, and SUSE), along with several research systems.
I've used all of these, and at least another score of operating systems over the last 20 years. I've seen excellent software produced in closed shops and in the open-source community. I've also seen terrible software produced in each.
Our experience has been that having trained administrators, good policies, appropriate tools, and a mixed environment of systems can result in a highly dependable and productive computing environment. In both development and operation, the human factor transcends the question of how open the source might be, and it is the most important factor in security.