At the 23rd National Information Systems Security Conference (NISSC) in October 2000, Gene Spafford was presented with the National Computer Systems Security Award.
This annual award is presented by NIST and the NSA (formerly by the NSA's National Computer Security Center (NCSC)). The award is granted for outstanding contributions to the advancement of computer security technology and is generally considered the most prestigious award in the field of information security and assurance. To warrant the award, a nominee must meet at least one, and preferably several, of the award's criteria.
The award is not necessarily a "lifetime achievement" award, but can be given on that basis.
Other winners of the award have been:
Year | Recipient(s) |
---|---|
1988 | Stephen T. Walker |
1989 | Willis Ware |
1990 | James Anderson |
1991 | Roger Schell |
1992 | Walter Tuchman |
1993 | Robert Courtney |
1994 | Donn Parker |
1995 | Dennis Branstad |
1996 | Whit Diffie, Martin Hellman, & Ron Rivest |
1997 | David Clark |
1998 | Butler Lampson |
1999 | Dorothy Denning |
2000 | Eugene H. Spafford |
2002 | Peter G. Neumann |
2006 | Virgil Gligor |
2007 | Steven M. Bellovin |
2008 | Michael D. Schroeder |
2010 | Jerry H. Saltzer |
2011+ | Award Discontinued |
The inscription on Spaf's award reads:
For recognition of outstanding contributions to the field of computer security as founder of the Center for Education and Research in Information Assurance and Security (CERIAS) and his approach to computer security through collaboration with key members of academia, government, and industry to promote and support programs of research, education, and community service; for his analysis of the "Morris Worm"; for developing PCERT, the first accredited academic incident response team; for publishing several of the leading computer security books for practitioners; and for his leadership at Purdue University, one of the world's leading institutions in the education of computer security graduates.
The award bears the signatures of William Mehuron, the Director of NIST's Information Technology Laboratory, and Michael J. Jacobs, the Deputy Director for Information Systems Security at NSA.
The presentation was made on October 16, 2000, in the opening plenary of the NISSC conference. Spaf was given time to make some comments to the plenary. The following section is the printed text of his remarks, which were delivered approximately as written rather than read verbatim.
Thank you.
Several times in prior years I sat out in the audience at this event and listened as others received this award. In each case, I was impressed when I heard the accomplishments of the recipients. I would tell myself that when I was more of a senior researcher, maybe I would do something really significant that would qualify for this award. Then, after each conference I would go back to Purdue and continue working on some of my strange ideas.
So, at first I was really quite surprised when I was informed that I had won this year's award. After all, I don't even feel all that senior! Then, as I thought more about it, I realized that I have been involved in information security and assurance as a practitioner, researcher, and educator for over 22 years. Maybe I'm a little more senior than I thought. (That might explain why I have to hold these papers at arm's length to read them!) After listening to that description of what I've done, I have to think that maybe I have done a few things that have made a difference.
I was told that along with the award, I would be given 10 or 15 minutes to make some remarks here today -- on any topic I might choose. This immediately brought to mind the subject of misplaced trust. Not as a topic for me to address, but as perhaps a description of the offer! However, I decided that maybe I could find a better use for this opportunity to address a captive audience. And my hosts have asked that I share with you some of my insights about security, and what I have been thinking about security issues.
Let me start by doing my Cassandra imitation. This is something that I have been doing for years, so it is well practiced. As you may remember from mythology, Cassandra was given the gift of being able to tell the future, but also cursed so that no one would believe her. In my vision of the future, I think the security landscape is going to get worse before it gets better. I am sure that many of you don't want to believe this, either.
The underlying reason for my prediction is simple human nature, not shortfalls in the technology. If you work with computer and network security long enough, you realize that the biggest problem is people: the people who design the software, the people who deploy it, the people who use the systems, the people who abuse the systems, and sometimes the people who guard the systems. There are certainly many technological challenges to be met, but the biggest problems still come back to people. As technologists we often try to solve every problem with technology -- and that isn't always the best approach.
As security specialists, we spend an awful lot of time cleaning up after human error. And we also expend a great deal of effort performing research to develop technology that forces people to do the safe thing -- such as not being able to turn the security off. And yet, the people seem to find new ways to make things break and fail.
Computer viruses provide an example. I remember that 10 years ago, researchers at conferences would talk about the potential dangers of things like macros. We developed mechanisms and algorithms that could prevent macro viruses from posing a huge threat. And yet, the companies that built the software and that needed to be in on these discussions were never represented at the conferences. When any of us would ask why, we were told that viruses were not their problem. In fact, that is still their claim. The people involved didn't see it as a problem that they needed to address -- it is someone else's problem. As a result, at the current rate of growth, by about 2004 we will have over 100,000 known computer viruses. (And as food for thought, consider that about 99,000 of them will likely be for software originating from Redmond, WA.)
So, many organizations now pay substantial fees to keep up-to-date virus scanners on all their machines. In many cases they're paying huge fees to deploy the scanners, but they don't keep them up-to-date. That's another people problem. And whether the scanners are up to date or not, despite numerous warnings, if users click on attachments in email with subjects such as "I Love You," disaster strikes.
It's also the case that we were discussing -- almost a decade ago -- the limitations of pattern-based anti-virus scanning. Researchers were able to show that the only reliable methods of preventing and detecting viruses were through methods such as integrity checking and confinement. Yet, we don't see those mechanisms in software today because the users find them too difficult to employ or understand. Instead, we see a half-dozen commercial scanners requiring frequent updates. Using our same projection into 2004, there will be on the order of one new virus reported every 60-90 minutes somewhere in the world. How will the current paradigm cope with that?
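To make the distinction concrete, below is a minimal sketch of the kind of integrity checking Spaf contrasts with pattern-based scanning: record a fingerprint of a file while it is known to be good, then alert when that fingerprint later changes. This illustration is mine, not code from CERIAS or any particular product, and the names in it are hypothetical; real integrity checkers (Tripwire-style tools) use cryptographic hashes and protected baseline databases, whereas the FNV-1a checksum here is non-cryptographic and chosen only to keep the example self-contained.

```c
/*
 * Illustrative integrity-check sketch (not a real tool): record a
 * checksum for a file, then verify later that the file still matches.
 * FNV-1a is NOT cryptographic; it is used only to avoid external
 * dependencies in this example.
 */
#include <stdio.h>
#include <stdint.h>

/* 64-bit FNV-1a hash over the bytes of an open file. */
static uint64_t fnv1a_file(FILE *f)
{
    uint64_t h = 1469598103934665603ULL;   /* FNV offset basis */
    int c;
    while ((c = fgetc(f)) != EOF) {
        h ^= (uint64_t)(unsigned char)c;
        h *= 1099511628211ULL;             /* FNV prime */
    }
    return h;
}

int main(int argc, char **argv)
{
    if (argc != 2 && argc != 3) {
        fprintf(stderr, "usage: %s file [expected-hash-hex]\n", argv[0]);
        return 2;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror(argv[1]); return 2; }
    uint64_t h = fnv1a_file(f);
    fclose(f);

    if (argc == 2) {                        /* baseline mode: print the fingerprint */
        printf("%016llx  %s\n", (unsigned long long)h, argv[1]);
        return 0;
    }

    /* verify mode: compare against the stored baseline value */
    unsigned long long expected = 0;
    sscanf(argv[2], "%llx", &expected);
    if (h == expected) {
        printf("OK: %s unchanged\n", argv[1]);
        return 0;
    }
    printf("ALERT: %s has been modified\n", argv[1]);
    return 1;
}
```

Run it once to record a baseline for a critical file, and again later with that value to verify nothing has changed. Unlike a signature scanner, it needs no knowledge of any particular virus and never needs an update when a new one appears -- which is exactly the point of the argument above.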
One of our basic problems is that of too much success: We have made portions of computing easy to use without deep understanding by the user. If this was the case with automobiles or telephones, that might be okay. But computers currently are too fragile and malleable. Thus, we have an environment where common people can buy WebTVs for a few hundred dollars, and be connected to the Internet in a matter of minutes, but they don't understand basic issues of firewalls, or viruses, or even how to back up their data.
We haven't done a very good job for the programmers, either. The demand for new coders is so great that we throw them the "Dummy's Guide to C and HTML" and set them to work. The result is desktop systems with tens of millions of lines of code containing all sorts of buffer overflows and protocol errors, because we haven't given them the tools or instruction to write sound software. The time lost by people staring at blue screens for cumulative centuries of downtime is staggering, and that doesn't begin to include the security problems.
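For readers who have never actually seen one, here is a minimal C sketch of the buffer-overflow pattern being described: a fixed-size buffer, an unchecked copy into it, and the bounded alternative that an untrained coder is rarely taught. The function names are hypothetical and the example is deliberately tiny; it is meant only to show how little code it takes to create the flaw.

```c
/* Illustrative only: the classic overflow pattern and a bounded fix. */
#include <stdio.h>
#include <string.h>

/* Unsafe: if `name` holds more than 15 characters plus the NUL
 * terminator, strcpy writes past the end of `buf`, corrupting adjacent
 * memory -- the root of many remotely exploitable vulnerabilities. */
void greet_unsafe(const char *name)
{
    char buf[16];
    strcpy(buf, name);                        /* no length check at all */
    printf("Hello, %s\n", buf);
}

/* Safer: always pass the destination size and let the library truncate. */
void greet_bounded(const char *name)
{
    char buf[16];
    snprintf(buf, sizeof buf, "%s", name);    /* truncates, never overflows */
    printf("Hello, %s\n", buf);
}

int main(void)
{
    greet_bounded("a string far longer than sixteen characters");
    /* Calling greet_unsafe() with the same input is undefined behavior. */
    return 0;
}
```

The fix is one line, but only if the programmer has been taught why the first version is dangerous -- which is the educational gap being described.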
And the system administrators? Well, we provide them networks of thousands of machines in the hands of untrained users. Each machine has dozens if not hundreds of unpatched vulnerabilities that will dribble out over the next few years, each system has way too much functionality, and we provide the end users with mechanisms to download and execute untrusted code from unknown sources without adequate safeguards or auditing.
Then there are the senior executives. They make purchasing decisions based on acquisition price rather than operational cost and risk. They decide to purchase based on the number of new functions instead of stability and quality. So it should be no surprise they end up with systems that satisfy the lowest common denominator and have all the security features turned off out of the box. Then, to secure those systems, they hire allegedly reformed hackers, the vast majority of whom know how to break systems but not how to design them.
Oh, and by the way, the software industry is aware of these problems. Many in that arena are responding -- but by pushing for measures such as the UCITA legislation that will allow them to shield themselves from consequences of shoddy practices, and even to prevent critical public comment on their wares. (I would strongly urge you to educate yourselves about the awful consequences if UCITA is passed in your states; see my editorial in issue E38 of the IEEE Cipher as a starting point or refer to <https://www.4cite.org>.)
Everyone in this loop is looking for the cheapest way out with the maximum profit. Everyone seems to be trying to pass the buck. Unfortunately, quality requires careful thought, a lot of work, and it isn't cheap. And security is indivisibly bound to quality.
My students seem to enjoy my analogies, so I'll throw one in here. Those of us in security are very much like heart doctors -- cardiologists. Our patients know that lack of exercise, too much dietary fat, and smoking are all bad for them. But they will continue to smoke, and eat fried foods, and practice being couch potatoes until they have their infarction. Then they want a magic pill to make them better all at once, without the effort. And by the way, they claim loudly that their condition really isn't their fault -- it was genetics, or the tobacco companies, or McDonalds that was to blame. And they blame us for not taking better care of them. Does this sound familiar?
But it doesn't have to be this way. We can do things better. We need to stop doing business as usual and start focusing on end-to-end quality. Security needs to be built in from the start -- not slapped on after the fact.
First of all, I would suggest to all of you that you stop thinking that the solution to every problem is to write a new DLL for Windows or a new set of HTML pages. Instead, think through what you really need to solve your problem. Consider new architectures, different software, and minimal solutions, even if it might cost more time and dollars to solve it initially. In the long run, you might have something more secure, more maintainable and more easily upgraded -- which are, ironically, the goals of switching to COTS solutions in the first place.
Second, we need to start holding companies and people accountable for their choices. If a company decides to release software with flaws that would have been caught with even minimal testing, they should be held liable. Managers, purchasing agents, and administrators all need to be held accountable for careless acts as well. We can't continue to blame it on the computer when systems fail. They fail because of faulty design and operation.
Third, realize that the average user is pretty darned average. Most of the people we have writing and designing software don't really understand that the typical user of the future (if not the present) doesn't really know how computers work. Instead of laughing over stupid user stories, we should be learning from them and designing better interfaces and documentation.
Coupled with that, we need to rethink the research and education we perform in this area. We should be including psychology, management, economics, and sociology in what we do. We should be focusing on addressing some of the problems of actual deployment and use instead of deriving yet more esoteric models that users won't accept. We should be exploring how to build cost-effective security into systems, and how to encourage users to turn those features on. This isn't going to be easy, however, as long as companies are eager to hire people with only a semester or two of programming experience, and as long as academia only rewards accomplishments displaying depth rather than breadth.
And that, in a way, brings me back to where I started this talk and why I'm up here speaking this morning.
I have spent the last dozen years trying to find synergy between disparate fields and concepts. The whole research and education program in CERIAS at Purdue is multidisciplinary in nature. We have linguists working with psychologists working with computer scientists who are working with economists. The results so far have been quite exciting, although we have sometimes had difficulty finding presentation venues where they understood what we have done. Our students clearly benefit from this mix, and we're trying to find ways to share the model with others. I am excited about what we are doing.
In my career so far, it seems I am responsible for some of the foundational research in computer viruses, intrusion detection, firewalls, software forensics, vulnerability scanning, attack and flaw classification, and software quality. I expect I'll add to that list in the 20 or so years I expect to have left in my career.
But when people ask me what I do for a living, I tell them I am a professor.
Yes, I suppose I could leave academia to join a company or even start my own. Maybe, with a little luck and a lot of hard work, my next idea could result in a major financial success. Then, maybe I could join the ranks of people with large stock options, profiles in financial magazines, and people in industry and government might actually listen to my advice. Who knows? I might even attract groupies... no, that seems way too unlikely.
No, I'm a professor. I'm not in it for the money. I'm in it for some less tangible things that mean more to me. There is something very satisfying and inspirational about being able to guide students to new understanding and insight. Those few of us in academia who work in security are touching the future in a very real way.
Fifty years from now, I think it unlikely that anyone will be using my scientific discoveries or referencing my writings. And I certainly hope they still aren't using any code I've written! But I am confident I will have had an influence on their world through the hands and minds of my students. And that gives me a sense of real hope for the future.
Thus, in closing, I'd like to say that as a scientist, I thank all the people at the National Computer Security Center and at NIST who selected me for this recognition today. But in addition, it is as one of the few educators over the years to have received this award that I am most honored and grateful.
I thank you all for your patience in listening to me this morning. Enjoy the rest of the conference.