This is Chapter 27 of Practical UNIX and Internet Security, Second Edition, by Simson Garfinkel and Gene Spafford, excerpted for Congress with permission. Copyright © 1996 by O'Reilly & Associates. No printed or on-line copies may be made for any other purpose without written permission of the publisher. To obtain the book, contact the publisher at the following address or check local bookstores.

O'Reilly & Associates
101 Morris Street
Sebastopol, CA 95472

1-800-998-9938 (in the U.S. or Canada)
1-707-829-0515 (international or local)
1-707-829-0104 (FAX)
http://www.ora.com


27. Who Do You Trust?

When your computer tells you that nobody has broken through your defenses, how do you know that you can trust what it is saying?

Can You Trust Your Computer?

For a few minutes, try thinking like a computer criminal. A few months ago you were fired from Big Whammix, the large smokestack employer on the other side of town, and now you're working for a competing company, Bigger Bammers. Your job at Bammers is corporate espionage; you've spent the last month trying to break into Big Whammix's central mail server. Yesterday, you discovered a bug in a version of sendmail 1 that Whammix is running, and you gained superuser access.

What do you do now?

Your primary goal is to gain as much valuable corporate information as possible, and to do so without leaving any evidence that would allow you to be caught. But you have a secondary goal of masking your steps, so that your former employers at Whammix will never figure out that they have lost information.

Realizing that the hole in the Whammix sendmail daemon might someday be plugged, you decide to create a new back door that you can use to gain access to the company's computers in the future. The easiest thing to do is to modify the computer's /bin/login program to accept hidden passwords. Therefore, you take your own copy of the source code to login.c and modify it to allow anybody to log in as root if they type a particular sequence of apparently random passwords. Then you compile the modified program and install it as /bin/login.

You want to hide evidence of your data collection, so you also patch the /bin/ls program. When the program is asked to list the contents of the directory in which you are storing your cracker tools and intercepted mail, it displays none of your files. You rig these programs so that the checksums reported by /usr/bin/sum are unchanged. Then you manipulate the system clock, or edit the raw disk, to set the timestamps on the modified files back to their original values, further cloaking your modifications.
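
The weak link in this scenario is the checksum itself: the simple checksum printed by /usr/bin/sum is easy to match deliberately, while a cryptographic hash is not. The following sketch, written in Python for brevity, shows the idea behind integrity checkers such as Tripwire; the watched paths and manifest location are our own examples, not anything prescribed by such tools.

#!/usr/bin/env python
# integrity_check.py -- a toy, Tripwire-style integrity checker.
# The watched paths and the manifest location are illustrative; in practice
# the manifest must live on read-only or offline media, or an intruder with
# superuser access can simply regenerate it after modifying the binaries.

import hashlib
import sys

WATCHED = ["/bin/login", "/bin/ls", "/usr/bin/ps", "/usr/bin/netstat"]
MANIFEST = "/mnt/trusted/manifest.txt"     # assumed read-only media mount point

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()

def snapshot():
    """Record a digest for every watched program."""
    with open(MANIFEST, "w") as out:
        for path in WATCHED:
            out.write("%s  %s\n" % (sha256(path), path))

def verify():
    """Recompute digests and compare them against the stored manifest."""
    clean = True
    with open(MANIFEST) as f:
        for line in f:
            digest, path = line.split(None, 1)
            if sha256(path.strip()) != digest:
                print("MODIFIED:", path.strip())
                clean = False
    return clean

if __name__ == "__main__":
    if sys.argv[1:] == ["snapshot"]:
        snapshot()
    else:
        sys.exit(0 if verify() else 1)

Note that a cracker who controls the machine can still defeat the check by tampering with the checker, the manifest, or the kernel itself -- which is exactly the escalation traced in Table 27-1 later in this chapter.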

You'll be connecting to the computer on a regular basis, so you also modify /usr/bin/netstat so that it doesn't display connections between the Big Whammix IP subnet and the subnet at Bigger Bammers. You may also modify the /usr/bin/ps and /usr/bin/who programs, so that they don't list users who are logged in via this special back door.

Content, you now spend the next five months periodically logging into the mail server at Big Whammix and making copies of all of the email directed to the marketing staff. You do so right up to the day that you leave your job at Bigger Bammers and move on to a new position at another firm. On your last day, you run a shell script that you have personally prepared that restores all of the programs on the hard disk to their original configuration. Then, as a parting gesture, your program introduces subtle modifications into the Big Whammix main accounting database.

Technological fiction? Hardly. By the middle of the 1990s, attacks against computers in which the system binaries were modified to prevent detection of the intruder had become commonplace. Once sophisticated attackers gain superuser access, the most common way you discover their presence is when they make a mistake.

Harry's Compiler

In the early days of the MIT Media Lab, there was a graduate student who was very unpopular with the other students in his lab. To protect his privacy, we'll call the unpopular student "Harry."

Harry was obnoxious and abrasive, and he wasn't a very good programmer either. So the other students in the lab decided to play a trick on him. They modified the PL/1 compiler on the computer that they all shared so that the program would determine the name of the person who was running it. If the person running the compiler was Harry, the program would run as usual, reporting syntax errors and the like, but it would occasionally, randomly, not produce a final output file.

This mischievous prank caused a myriad of troubles for Harry. He would make a minor change to his program, run it, and occasionally the program would run the same way as it did before he made his modification. He would fix bugs, but the bugs would still remain. But then, whenever he went for help, one of the other students in the lab would sit down at the terminal, log in, and everything would work properly.

Poor Harry. It was a cruel trick. Somehow, though, everybody forgot to tell him about it. He soon grew frustrated with the whole enterprise, and eventually left school.

And you thought those random bugs in your system were there by accident?

Trusting Trust

Perhaps the definitive account of the problems inherent in computer security and trust is related in Ken Thompson's article "Reflections on Trusting Trust." 2 Thompson describes a back door planted in an early research version of UNIX.

The back door was a modification to the /bin/login program that would allow him to gain superuser access to the system at any time, even if his account had been deleted, by providing a predetermined username and password. While such a modification is easy to make, it's also an easy one to detect by looking at the computer's source code. So Thompson modified the computer's C compiler to detect if it was compiling the login.c program. If so, then the additional code for the back door would automatically be inserted into the object-code stream, even though the code was not present in the original C source file.

Thompson could now have the login.c program inspected by his coworkers, compile the program, install the /bin/login executable, and yet be assured that the back door was firmly in place.

But what if somebody inspected the source code for the C compiler itself? Thompson thought of that case as well. He further modified the C compiler so that it would detect whether it was compiling the source code for itself. If so, the compiler would automatically insert the special program-recognition code into the new compiler binary. After one more round of compilation, Thompson was able to put all the original source code back in place.

Thompson's experiment was like a magic trick. There was no back door in the login.c source file and no back door in the source code for the C compiler, and yet there was a back door in both the final compiler and in the login program. Abracadabra!
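
A deliberately tiny sketch may make the two-stage trick clearer. It is written in Python rather than C, the recognition tests and patch strings are invented for illustration, and the self-reproducing second stage is only gestured at with a placeholder, because writing it out faithfully requires a quine-like construction:

# A toy illustration of Thompson's trick, in Python rather than C.
# The recognition tests and patch strings below are invented for the example.

LOGIN_PATCH = (
    "\n    if password == 'backdoor':   # never appears in login.c itself\n"
    "        return True\n"
)

# In the real attack this string holds the *entire* injection logic (a
# quine-like construction), so that compiling the clean compiler source
# reproduces a doctored compiler.  A placeholder stands in for it here.
SELF_PATCH = "\n# ...re-insert the LOGIN_PATCH and SELF_PATCH tests here...\n"

def toy_compile(source_text):
    """Stand-in for cc: 'compiles' a program by returning its text."""
    output = source_text
    if "def check_password(" in source_text:    # this looks like login.c
        output += LOGIN_PATCH
    if "def toy_compile(" in source_text:       # this looks like the compiler
        output += SELF_PATCH
    return output

Once a doctored toy_compile is installed, both tests ride along in every future build of the compiler and of login, even though neither appears in any source file a reviewer would ever see.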

What hidden actions do your compiler and login programs perform?

What the Superuser Can and Cannot Do

As all of these examples illustrate, technical expertise combined with superuser privileges on a computer is a powerful combination. Together, they let an attacker change the very nature of the computer's operating system. An attacker can modify the system to create hidden directories that don't show up under normal circumstances (if at all). Attackers can change the system clock, making it look as if the files that they modify today were actually modified months ago. An attacker can forge electronic mail. (Actually, anybody can forge electronic mail, but an attacker can do a better job of it.)

Of course, there are some things that an attacker cannot do, even if that attacker is a technical genius and has full access to your computer and its source code. An attacker cannot, for example, decrypt a message that has been encrypted with a perfect encryption algorithm. But he can alter the code to record the key the next time you type it. An attacker probably can't program your computer to perform mathematical calculations a dozen times faster than it currently does, although doing so would have few security implications anyway. Most attackers can't read the contents of a file after it's been written over with another file, unless they take apart your computer and take the hard disk to a laboratory. However, an attacker with privileges can alter your system so that files you have deleted are still accessible (to him).

In each case, how do you tell if the attack has occurred?

The "what-if" scenario can be taken to considerable lengths. Consider an attacker who is attempting to hide a modification in a computer's /bin/login program: (See Table 27-1.)

Table 27-1. The "What-If" Scenario

Attack: The attacker plants a back door in the /bin/login program to allow unauthorized access.
Response: You use PGP to create a digital signature of all system programs. You check the signatures every day.

Attack: The attacker modifies the version of PGP that you are using, so that it will report that the signature on /bin/login verifies, even if it doesn't.
Response: You copy /bin/login onto another computer before verifying it with a trusted copy of PGP.

Attack: The attacker modifies your computer's kernel by adding loadable modules, so that when /bin/login is sent through a TCP connection, the original /bin/login, rather than the modified version, is sent.
Response: You put a copy of PGP on a removable hard disk. You mount the hard disk to perform the signature verification and then unmount it. Furthermore, you put a good copy of /bin/login onto your removable hard disk and then copy the good program over the installed version on a regular basis. (A sketch of this trusted-media check follows the table.)

Attack: The attacker regains control of your system and further modifies the kernel so that the modification to /bin/login is patched into the running program after it loads. Any attempt to read the contents of the /bin/login file returns the original, unmodified version.
Response: You reinstall the entire system software, and configure the system to boot from a read-only device such as a CD-ROM.

Attack: Because the system now boots from a CD-ROM, you cannot easily update system software as bugs are discovered. The attacker waits for a bug to crop up in one of your installed programs, such as sendmail. When the bug is reported, the attacker will be ready to pounce.
Response: Your move . . .
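
The "trusted media" responses in Table 27-1 might look something like the following sketch. GnuPG stands in here for the PGP of the text, and the mount point, keyring, and signature locations are assumptions made for the example; the essential point is only that the verifier, the keys, and the signatures all come from media the intruder cannot write to.

# verify_signed.py -- sketch of the "trusted media" response from Table 27-1.
# The mount point, keyring, and signature locations are assumptions for this
# example, and GnuPG stands in for the PGP of the text.

import subprocess

TRUSTED = "/mnt/cdrom"                        # read-only media, mounted on demand
GPG = TRUSTED + "/bin/gpg"                    # trusted copy of the verifier
KEYRING = TRUSTED + "/keys/pubring.gpg"       # trusted copy of the public keys

def verify(program):
    sig = TRUSTED + "/sigs" + program + ".sig"   # e.g. /mnt/cdrom/sigs/bin/login.sig
    result = subprocess.run(
        [GPG, "--no-default-keyring", "--keyring", KEYRING, "--verify", sig, program],
        capture_output=True,
    )
    return result.returncode == 0

for program in ("/bin/login", "/bin/ls", "/usr/bin/ps"):
    print(program, "OK" if verify(program) else "TAMPERED OR UNSIGNED")

Even this is only as strong as the next row of the table: a modified kernel can still lie about what is read from disk, which is why the escalation ends with booting from read-only media.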

If you think that this description sounds like a game of chess, you're correct. Practical computer security is a series of actions and counteractions, of attacks and defenses. As with chess, success depends upon anticipating your opponent's moves and planning countermeasures ahead of time. Simply reacting to your opponent's moves is a recipe for failure.

The key thing to note, however, is that somewhere, at some level, you need to trust what you are working with. Maybe you trust the hardware. Maybe you trust the CD-ROM. But at some level, you need to trust what you have on hand. Perfect security isn't possible, so we need to settle for the next best thing -- reasonable trust on which to build.

The question is, where do you place that trust?

Can You Trust Your Suppliers?

Your computer does something suspicious. You discover that the modification dates on your system software have changed. It appears that an attacker has broken in, or that some kind of virus is spreading. So what do you do? You save your files to backup tapes, format your hard disks, and reinstall your computer's operating system and programs from the original distribution media.

Is this really the right plan? You can never know. Perhaps your problems were the result of a break-in. But sometimes, the worst is brought to you by the people who sold you your hardware and software in the first place.

Hardware Bugs

In 1994 the public learned that Intel Pentium processors had a floating-point flaw that infrequently resulted in a significant loss of precision when performing some division operations. Not only had Intel officials known about the problem, but they apparently had decided not to tell their customers, acknowledging the flaw only after significant negative public reaction.

Several vendors of disk drives have had problems with their products failing suddenly and catastrophically, sometimes within days of being placed in use. Other disk drives failed when they were used with UNIX, but not with the vendor's own proprietary operating system. The reason: UNIX did not run the necessary command to map out bad blocks on the media. Yet, these drives were widely bought for use with the UNIX operating system.

Furthermore, there are many cases of effective self-destruct sequences in various kinds of terminals and computers. For example, Digital's original VT100 terminal had an escape sequence that switched the terminal from a 60Hz refresh rate to a 50Hz refresh rate, and another escape sequence that switched it back. By repeatedly sending the two escape sequences to a VT100 terminal, a malicious programmer could cause the terminal's flyback transformer to burn out -- sometimes spectacularly!

A similar sequence of instructions could be used to break the monochrome monitor on the original IBM PC video display.

Viruses on the Distribution Disk

A few years ago, there was a presumption in the field of computer security that manufacturers who distributed computer software took the time and due diligence to ensure that their computer programs, if not free of bugs and defects, were at least free of computer viruses and glaring computer security holes. Users were warned not to run shareware and not to download programs from bulletin board systems, because such programs were likely to contain viruses or Trojan horses. Indeed, at least one company, which manufactured a shareware virus scanning program, made a small fortune telling the world that everybody else's shareware programs were potentially unsafe.

Time and experience have taught us otherwise.

In recent years, a few viruses have been distributed with shareware, but we have also seen many viruses distributed in shrink-wrapped programs. The viruses come from small companies, and from the makers of major computer systems. Even Microsoft distributed a CD-ROM with a virus hidden inside a macro for Microsoft Word. The Bureau of the Census distributed a CD-ROM with a virus on it. One of the problems posed by viruses on distribution disks is that many installation procedures require that the user disable any antiviral software that is running.

The mass-market software industry has also seen a problem with logic bombs and Trojan horses. For example, in 1994, Adobe distributed a version of Photoshop 3.0 for the Macintosh with a "time bomb" designed to make the program stop working at some point in the future; the time bomb had inadvertently been left in the program from the beta-testing cycle. Because commercial software is not distributed in source code form, you cannot inspect a program and tell whether this kind of intentional bug is present.

Like shrink-wrapped programs, shareware is also a mixed bag. Some shareware sites have system administrators who are very conscientious, and who go to great pains to scan their software libraries with viral scanners before making them available for download. Other sites have no controls, and allow users to place files directly in the download libraries. In the spring of 1995, a program called PKZIP30.EXE made its way around a variety of FTP sites on the Internet and through America Online. This program appeared to be the 3.0 beta release of PKZIP , a popular DOS compression utility. But when the program was run, it erased the user's hard disk.

Buggy Software

Consider the following, rather typical, disclaimer on a piece of distributed software:

NO WARRANTY OF PERFORMANCE. THE PROGRAM AND ITS ASSOCIATED DOCUMENTATION ARE LICENSED "AS IS" WITHOUT WARRANTY AS TO THEIR PERFORMANCE, MERCHANTABILITY, OR FITNESS FOR ANY PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE RESULTS AND PERFORMANCE OF THE PROGRAM IS ASSUMED BY YOU AND YOUR DISTRIBUTEES. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU AND YOUR DISTRIBUTEES (AND NOT THE VENDOR) ASSUME THE ENTIRE COST OF ALL NECESSARY SERVICING, REPAIR, OR CORRECTION.

Software sometimes has bugs. You install it on your disk, and under certain circumstances, it damages your files or returns incorrect results. The examples are legion. You may think that the software is infected with a virus -- it is certainly behaving as if it is -- but the problem is merely the result of poor programming.

If the creators and vendors of the software don't have confidence in their own software, why should you? If the vendors disclaim "... warranty as to [its] performance, merchantability, or fitness for any particular purpose," then why are you paying them money and using their software as a base for your business?

Unfortunately, quality is not a priority for most software vendors. In most cases, they license the software to you with a broad disclaimer of warranty (similar to the above) so there is little incentive for them to be sure that every bug has been eradicated before they go to market. The attitude is often one of "We'll fix it in the next release, after the customers have found all the bugs." Then they introduce new features with new bugs. Yet people wait in line at midnight to be the first to buy software that is full of bugs and may erase their disks when they try to install it.

Other bugs abound. Recall that the first study by Professor Barton Miller, cited in Chapter 23, found that more than one-third of common programs supplied by several UNIX vendors crashed or hung when they were tested with a trivial program that generated random input. Five years later, he reran the tests. The results? Although most vendors had improved to where "only" one-fourth of the programs crashed, one vendor's software exhibited a 46% failure rate! This failure rate was despite wide circulation and publication of the report, and despite the fact that Miller's team made the test code available for free to vendors.

Most frightening, the testing performed by Miller's group is one of the simplest, least-effective forms of testing that can be performed (random, black-box testing). Do vendors do any reasonable testing at all?
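
Miller's experiment is easy to approximate. The sketch below -- with the target program, trial count, and input sizes as placeholder choices -- feeds random bytes to a program's standard input and counts the runs that crash or hang, which is precisely the random, black-box testing described above.

# fuzz.py -- crude random ("fuzz") testing in the spirit of Miller's study.
# The program under test, the number of trials, and the input sizes are
# placeholders; substitute any utility that reads standard input.

import random
import subprocess

TARGET = ["/usr/bin/some-utility"]            # hypothetical program under test
TRIALS = 100

crashes = 0
for trial in range(TRIALS):
    junk = bytes(random.randrange(256) for _ in range(random.randrange(1, 4096)))
    try:
        proc = subprocess.run(TARGET, input=junk, capture_output=True, timeout=5)
        if proc.returncode < 0:               # killed by a signal: SIGSEGV, SIGBUS, ...
            crashes += 1
            print("trial %d: died with signal %d" % (trial, -proc.returncode))
    except subprocess.TimeoutExpired:
        crashes += 1
        print("trial %d: hung" % trial)

print("%d of %d runs crashed or hung" % (crashes, TRIALS))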

Consider the case of a software engineer from a major PC software vendor who came to Purdue to recruit in 1995. During his presentation, students reported that he stated that two of the top 10 reasons to work for his company were "You don't need to bother with that software engineering stuff -- you simply need to love to code" and "You'd rather write assembly code than test software." As you might expect, the company has developed a reputation for quality problems. What is surprising is that they continue to be a market leader, year after year, and that people continue to buy their software. 3

What's your vendor's policy about testing and good software engineering practices?

Or, consider the case of someone who implements security features without really understanding the "big picture." As we noted in "Picking a Random Seed" in Chapter 23, a sophisticated encryption algorithm was built into Netscape Navigator to protect credit card numbers in transit on the network. Unfortunately, the implementation used a weak, predictable initialization of the random-number generator used to produce the session key. The result? Someone with an account on a client machine could easily obtain enough information to crack the key in a matter of seconds, using only a small program.
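
The underlying mistake is a general one: seeding a deterministic generator with values an attacker can guess, such as the time of day or a process ID. The sketch below uses an arbitrary key length and guessing window to show why such a key falls to a trivial search; a real attacker would test each guess against captured ciphertext rather than against the key itself, but the size of the search space is the whole story.

# weak_seed.py -- why a clock-based seed is fatal to a secret key.
# The 16-byte key length and the five-second guessing window are arbitrary.

import random
import time

def make_key(seed):
    rng = random.Random(seed)                 # any deterministic PRNG will do
    return bytes(rng.randrange(256) for _ in range(16))

# The "server" seeds its generator with the clock, in milliseconds.
creation_time = int(time.time() * 1000)
secret_key = make_key(creation_time)

# The attacker knows roughly when the key was made, so only a few thousand
# seeds are possible.  (Comparing keys directly just keeps the sketch short.)
for guess in range(creation_time - 5000, creation_time + 1):
    if make_key(guess) == secret_key:
        print("key recovered; the seed was", guess)
        break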

Hacker Challenges

Over the past decade, several vendors have issued public challenges stating that their systems are secure because they haven't been broken by "hacker challenges." Usually, these challenges involve some vendor putting its system on the Internet and inviting all comers to take a whack in return for some token prize. Then, after a few weeks or months, the vendor shuts down the site, proclaims its product invulnerable, and advertises the results as if they were a badge of honor. But consider the following:

  • Few such "challenges" are conducted using established testing techniques. They are ad hoc, random tests.
  • That no problems are found does not mean that no problems exist. The testers might not have exposed them yet, or they might not have recognized them. (Consider how often software is released with bugs, even after careful scrutiny.) Furthermore, how do you know that the testers will report what they find? In some cases, the information may be more valuable to the hackers later on, after the product has been sold to many customers, because at that time they'll have more profitable targets to pursue.
  • Simply because the vendor does not report a successful penetration does not mean that one did not occur -- the vendor may choose not to report it because it would reflect poorly on the product. Or, the vendor may not have recognized the penetration.
  • Challenges give potential miscreants some period to practice breaking the system without penalty. Challenges also give miscreants an excuse if they are caught trying to break into the system later (e.g., "We thought the contest was still going on.")
  • Seldom do the really good experts, on either side of the fence, participate in such exercises. Thus, anything done is usually done by amateurs. (The "honor" of having won the challenge is not sufficient to lure the good ones into the challenge. Think about it: good consultants can command fees of several thousand dollars per day in some cases -- why should they effectively donate their time and names for free advertising?)
  • Furthermore, the whole process sends the wrong messages: that we should build things and then try to break them (rather than building them right in the first place), or that there is some prestige or glory in breaking systems. We don't test the strengths of bridges by driving over them with a variety of cars and trucks to see if they fail, and pronounce them safe if no collapse occurs during the test.

    Some software designers could learn a lot from civil engineers. So might the rest of us: in ancient times, if a house fell or a bridge collapsed and injured someone, the engineer who designed it was crushed to death in the rubble as punishment!

    Next time you see an advertiser using a challenge to sell a product, you should ask if the challenger is really giving you more confidence in the product...or convincing you that the vendor doesn't have a clue as to how to really design and test security.

    If you think that a security challenge builds the right kind of trust, then get in touch with us. We have these magic pendants. No one wearing one has ever had a system broken into, despite challenges to all the computer users who happened to be around when the systems were developed. Thus, the pendants must be effective at keeping out hackers. We'll be happy to sell some to you. After all, we employ the same rigorous testing methodology as your security software vendors, so our product must be reliable, right?

    Security Bugs that Never Get Fixed

    There is also the question of legitimate software distributed by computer manufacturers that contains glaring security holes. More than a year after the release of sendmail Version 8, nearly every major UNIX vendor was still distributing its computers equipped with sendmail Version 5. (Versions 6 and 7 were interim versions that were never released.) While Version 8 had many improvements over Version 5, it also had many critical security patches. Was the unwillingness of UNIX vendors to adopt Version 8 negligence -- a demonstration of their laissez-faire attitude towards computer security -- or merely a reflection of pressing market conditions? 4 Are the two really different?

    How about the case in which many vendors still release versions of TFTP that, by default, allow remote users to obtain copies of the password file? What about versions of RPC that allow users to spoof NFS by using proxy calls through the RPC system? What about software that includes a writable utmp file that enables a user to overwrite arbitrary system files? Each of these cases is a well-known security flaw. In each case, the vendors did not provide fixes for years -- even now, they may not be fixed.
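
    Some of these weaknesses are easy to audit for yourself. As a small illustration, the sketch below flags world-writable system files such as the utmp file just mentioned; the list of candidate paths is an assumption, since utmp lives in different places on different systems.

# check_writable.py -- flag world-writable system files, such as a writable
# utmp.  The candidate paths are illustrative; utmp lives in different places
# on different systems.

import os
import stat

CANDIDATES = ["/etc/utmp", "/var/run/utmp", "/var/adm/utmp", "/etc/passwd", "/etc/aliases"]

for path in CANDIDATES:
    try:
        mode = os.stat(path).st_mode
    except OSError:
        continue                              # not present on this system
    if mode & stat.S_IWOTH:
        print("WORLD-WRITABLE:", path)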

    Many vendors say that computer security is not a high priority, because they are not convinced that spending more money on computer security will pay off for them. Computer companies are rightly concerned with the amount of money that they spend on computer security. Developing a more secure computer is an expensive proposition that not every customer may be willing to pay for. The same level of computer security may not be necessary for a server on the Internet as for a server behind a corporate firewall, or on a disconnected network. Furthermore, increased computer security will not automatically increase sales: firms that want security generally hire staff who are responsible for keeping systems secure; users who do not want (or do not understand) security are usually unwilling to pay for it at any price, and frequently disable security when it is provided.

    On the other hand, a computer company is far better equipped to safeguard the security of its operating system than is an individual user. One reason is that a computer company has access to the system's source code. A second reason is that most large companies can easily devote two or three people to assuring the security of their operating system, whereas most businesses are hard-pressed to devote even a single full-time employee to the job of computer security.

    We believe that computer users are beginning to see system security and software quality as distinguishing features, much in the way that they see usability, performance, and new functionality as features. When a person breaks into a computer, over the Internet or otherwise, the act reflects poorly on the maker of the software. We hope that computer companies will soon make software quality at least as important as new features.

    Network Providers that Network Too Well

    Network providers pose special challenges for businesses and individuals. By their nature, network providers have computers that connect directly to your computer network, placing the provider (or perhaps a rogue employee at the providing company) in an ideal position to launch an attack against your installation. Providers are also usually in possession of confidential billing information belonging to their users. Some providers even have the ability to directly charge a user's credit card or to deduct funds from a user's bank account.

    Dan Geer, a Cambridge-based computer security professional, tells an interesting story about an investment brokerage firm that set up a series of direct IP connections between its clients' computers and the computers at the brokerage firm. The purpose of the links was to allow the clients to trade directly on the brokerage firm's computer system. But as the client firms were also competitors, the brokerage house equipped the link with a variety of sophisticated firewall systems.

    It turns out, says Geer, that although the firm had protected itself from its clients, it did not invest the time or money to protect the clients from each other. One of the firm's clients proceeded to use the direct connection to break into the system operated by another client. A significant amount of proprietary information was stolen before the intrusion was discovered.

    In another case, a series of articles appearing in The New York Times during the first few months of 1995 revealed how hacker Kevin Mitnick allegedly broke into a computer system operated by Netcom Communications. One of the things that Mitnick is alleged to have stolen was a complete copy of Netcom's client database, including the credit card numbers for more than 30,000 of Netcom's customers. Certainly, Netcom needed the credit card numbers to bill its customers for service. But why were they placed on a computer system that could be reached from the Internet? Why were they not encrypted?

    Think about all those services that are sprouting up on the World Wide Web. They claim to use all kinds of super encryption protocols to safeguard your credit card number as it is sent across the network. But remember you can reach their machines via the Internet to make the transaction. What kinds of safeguards do they have in place at their sites to protect all the card numbers after they're collected? If you saw an armored car transferring your bank's receipts to a "vault" housed in a cardboard box on a park bench, would the strength of the armored car cause you to trust the safety of the funds?

    Can You Trust People?

    Ultimately, people hack into computers. People delete files and alter system programs. People steal information. You should determine who you trust (and who you don't trust).

    Your Employees?

    Much of this book has been devoted to techniques that protect computer systems from attacks by outsiders. This focus isn't only our preoccupation: overwhelmingly, companies fear attacks from outsiders more than they fear attacks from the inside. Unfortunately, such fears are misplaced. Statistics compiled by the FBI and others show that the majority of major economic losses from computer crime appear to involve people on the "inside."

    Companies seem to fear attacks from outsiders more than insiders because they fear the unknown. Few managers want to believe that their employees would betray their bosses, or the company as a whole. Few businesses want to believe that their executives would sell themselves out to the competition. As a result, many organizations spend vast sums protecting themselves from external threats, but do little in the way of instituting controls and auditing to catch and prevent problems from the inside.

    Not protecting your organization against its own employees is a short-sighted policy. Protecting against insiders automatically buys an organization protection from outsiders as well. After all, what do outside attackers want most of all? They want an account on your computer, an account from which they can unobtrusively investigate your system and probe for vulnerabilities. Employees, executives, and other insiders already have this kind of access to your computers. And, according to recent computer industry surveys, attacks from outsiders and from rogue software account for only a small percentage of overall corporate losses; as many as 80% of attacks come from employees and former employees who are dishonest or disgruntled.

    No person in your organization should be placed in a position of absolute trust. Unfortunately, many organizations implicitly trust the person who runs the firm's computer systems. Increasingly, outside auditors are now taking a careful look at the policies and procedures in Information Systems support organizations -- making certain that backups are being performed, that employees are accountable for their actions, and that everybody operates within a framework of checks and balances.

    Your System Administrator?

    The threat of a dishonest system administrator should be obvious enough. After all, who knows better where all the goodies are kept, and where all the alarms are set? However, before you say that you trust your support staff, ask yourself a question: they may be honest, but are they competent?

    We know of a recent case in which a departmental server was thoroughly compromised by at least two different groups of hackers. The system administrator had no idea what had happened, probably because he wasn't very adept at UNIX system administration. How were the intruders eventually discovered? During a software audit, the system was revealed to be running software that was inconsistent with what should have been there. What should have been there was an old, unpatched version of the software. 5 Investigation revealed that hackers had apparently installed new versions of system commands to keep their environment up to date because the legitimate administrator wasn't doing the job.

    Essentially, the hackers were doing a better job of maintaining the machine than the hired staff was. The hackers were then using the machine to stage attacks against other computers on the Internet.

    In such cases, you probably have more to fear from incompetent staff than from outsiders. After all, if the staff bungles the backups, reformats the disk drives, and then accidentally erases the only good copies of data you have left, the data is as effectively destroyed as if a professional saboteur had hacked into the system and deleted it.

    Your Vendor?

    We heard about one case in which a field service technician for a major computer company was busy casing sites for later burglaries. He was shown into the building, was given unsupervised access to the equipment rooms, and was able to obtain alarm codes and door-lock combinations over time. When the thefts occurred, police were sure the crime was an inside job, but no one immediately realized how "inside" the technician had become.

    There are cases in which U.S. military and diplomatic personnel at overseas postings had computer problems and took their machines in to local service centers. When they got home, technicians discovered a wide variety of interesting and unauthorized additions to the circuitry.

    What about the software you get from the vendor? For instance, AT&T claims that Ken Thompson's compiler modifications (described earlier under "Trusting Trust") were never in any code that was shipped to customers. How do we know for sure? What's really in the code on your machines?

    Your Consultants?

    There are currently several people in the field of computer security consulting whose past is not quite sterling. These are people who have led major hacking rings, who brag about breaking into corporate and government computers, and who may have been indicted and prosecuted for computer crimes. Some of them have even done time in jail. Now they do security consulting and a few even use their past in advertising (although most do not).

    How trustworthy are these people? Who better to break into your computer system later on than the person who helped design the defenses? Think about this issue from a liability standpoint -- would you hire a confessed arsonist to install your fire alarm system, or a convicted pedophile to run your company day-care center? He'd certainly know what to protect the children against! What would your insurance company have to say about that? Your stockholders?

    Some security consultants are more than simply criminals; they are compulsive system hackers. Why should you believe that they are more trustworthy and have more self-control now than they did a few years ago?

    Bankers Mistrust

    Banks and financial institutions have notorious reputations for not reporting computer crimes. We have heard of cases in which bank personnel have traced active hacking attempts to a specific person, or developed evidence showing that someone had penetrated their systems, but they did not report these cases to the police for fear of the resulting publicity.

    In other cases, we've heard that bank personnel have paid people off to get them to stop their attacks and keep quiet. Some experts in the industry contend that major banks and trading houses are willing to tolerate a few million dollars in losses per week rather than suffer the perceived bad publicity about a computer theft. To them, a few million a week is less than the interest they make on investments over a few hours: it's below the noise threshold.

    Are these stories true? We don't know, but we haven't seen too many cases of banks reporting computer crimes, and we somehow don't think they are immune to attack. If anything, they're bigger targets. However, we do know that bankers tend to be conservative, and they worry that publicity about computer problems is bad for business.

    Odd, if true. Think about the fact that when some kid with a gun steals $1000 from the tellers at a branch office, the crime makes the evening news, pictures are in the newspaper, and a regional alert is issued. No one loses confidence in the bank. But if some hacker steals $5 million as the result of a bug in the software and a lack of ethics...

    Who do you entrust with your life's savings?

    If you are careful not to hire suspicious individuals, how about your service provider? Your maintenance organization? Your software vendor? The company hired to clean your offices at night? The temp service that provides a replacement when your secretary goes on leave? Potential computer criminals, and those with an unsavory past, are as capable of putting on street clothes and holding down a regular job as anyone else. They don't have a scarlet "H" tattooed on their foreheads.

    Can you trust references for your hires or consultants? Consider the story, possibly apocryphal, of the consultant at the large bank who found a way to crack security and steal $5 million. He was caught by bank security personnel later, but they couldn't trace the money or discover how he did it. So he struck a deal with the bank: he'd return all but 10% of the money, forever remain silent about the theft, and reveal the flaw he exploited in return for no prosecution and a favorable letter of reference. The bank eagerly agreed, and wrote the loss off as an advertising and training expense. Of course, with the favorable letter, he quickly got a job at the next bank running the same software. After only a few such job changes, he was able to retire with a hefty savings account in Switzerland.

    Response Personnel?

    Your system has been hacked. You have a little information, but not much. If someone acts quickly, before logs at remote machines are erased, you might be able to identify the culprit. You get a phone call from someone claiming to be with the CERT-CC, or maybe the FBI. They tell you they learned from the administrator at another site that your systems might have been hacked. They tell you what to look for, then ask what you found on your own. They promise to follow right up on the leads you have and ask you to remain silent so as not to let on to the hackers that someone is hot on their trail. You never hear back from them, and later inquiries reveal that no one from the agency involved ever called you.

    Does this case sound farfetched? It shouldn't. Administrators at commercial sites, government sites, and even response teams have all received telephone calls from people who claim to be representatives of various agencies but who aren't. We've also heard that some of these same people have had their email intercepted, copied, and read on its way to their machines (usually, a hacked service provider or altered DNS record is all that is needed). The result? The social engineers working the phones have some additional background information that makes them sound all the more official.

    Who do you trust on the telephone when you get a call? Why?

    What All This Means

    We haven't presented the material in this chapter to induce paranoia in you, gentle reader. Instead, we want to get across the point that you need to consider carefully who and what you trust. If you have information or equipment that is of value to you, you need to think about the risks and dangers that might be out there. To have security means to trust, but that trust must be well placed.

    If you are protecting information that is worth a great deal, attackers may well be willing to invest significant time and resources to break your security. You may also think you don't have information that is worth a great deal; nevertheless, you are a target anyway. Why? Your site may be a convenient stepping stone to another, more valuable site. Or perhaps one of your users is storing information of great value that you don't know about. Or maybe you simply don't realize how much the information you have is actually worth. For instance, in the late 1980's, Soviet agents were willing to pay hundreds of thousands of dollars for copies of the VMS operating system source -- the same source that many site administrators kept in unlocked cabinets in public computer rooms.

    To trust, you need to be suspicious. Ask questions. Do background checks. Test code. Get written assurances. Don't allow disclaimers. Harbor a healthy suspicion of fortuitous coincidences (the FBI happening to call or that patch tape showing up by FedEx, hours after you discover someone trying to exploit a bug that the patch purports to fix). You don't need to go overboard, but remember that the best way to develop trust is to anticipate problems and attacks, and then test for them. Then test again, later. Don't let a routine convince you that no problems will occur.

    If you absorb everything we've written in this book, and apply it, you'll be way ahead of the game. However, this information is only the first part of a comprehensive security plan. You need to constantly be accumulating new information, studying your risks, and planning for the future. Complacency is one of the biggest dangers you can face. As we said at the beginning of the book, UNIX can be a secure system, but only if you understand it and deploy it in a monitored environment.

    You can trust us on that.


    Footnotes

    1. This is a simple enough bet -- sendmail seems to have an endless supply of bugs and design misfeatures leading to security problems.
    2. Communications of the ACM, volume 27, number 8, August 1984.
    3. The same company introduced a product that responded to a wrong password being typed three times in a row by prompting the user with something to the effect of, "You appear to have set your password to something too difficult to remember. Would you like to set it to something simpler?" Analysis of this approach is left as an exercise for the reader.
    4. Or was the new, "improved" program simply too hard to configure? At least one vendor told us that it was.
    5. Actually, the very latest software should have been there. However, the system administrator hadn't done any updates in more than two years, so we expected the old software to be in place.

    About the Authors

    Simson Garfinkel is a computer consultant, science writer, Contributing Writer at WIRED magazine, Editor-at-Large for Internet Underground, and Senior Editor at SunExpert magazine; he is also affiliated with many other magazines and newspapers. He is the author of PGP: Pretty Good Privacy (O'Reilly & Associates), NeXTStep Programming (Springer-Verlag), and The UNIX-Haters Handbook (IDG). Mr. Garfinkel writes frequently about science and technology, as well as their social impacts.

    Gene Spafford is on the faculty of the Department of Computer Sciences at Purdue University. He is the founder and director of the Computer Operations, Audit, and Security Technology (COAST) Laboratory at Purdue, and is also associated with the Software Engineering Research Center (SERC) there. Professor Spafford is an active researcher in the areas of software testing and debugging, applied security, and professional computing issues. He was a participant in the effort to bring the Internet Worm under control; his published analyses of that incident are considered the definitive explanations. He was the consulting editor for Computer Crime: A Crime-fighter's Handbook (O'Reilly & Associates, 1995), and has also co-authored a widely praised book on computer viruses. He supervised the development of the first COPS and Tripwire security audit software packages, and he has been a frequently invited speaker at computer ethics and computer security events around the world. He is on numerous editorial and advisory boards, and is active in many professional societies, including ACM, Usenix, IEEE (as a Senior Member), and the IEEE Computer Society. He is involved with several working groups of the IFIP Technical Committee 11 on Security and Protection in Information Processing Systems.

    Colophon

    Our look is the result of reader comments, our own experimentation, and feedback from distribution channels. Distinctive covers complement our distinctive approach to technical topics, breathing personality and life into potentially dry subjects. UNIX and its attendant programs can be unruly beasts. Nutshell Handbooks help you tame them.

    The image featured on the cover of Practical UNIX and Internet Security is a safe. The concept of a safe has been with us for a long time. Methods for keeping valuables safely have been in use since the beginning of recorded history. The first physical structures that we think of as safes were developed by the Egyptians, Greeks, and Romans. These early safes were simply wooden boxes. In the Middle Ages and Renaissance in Europe, these wooden box safes started being reinforced with metal bands, and some were equipped with locks. The first all-metal safe was developed in France in 1820.

    Edie Freedman designed the cover of this book, using a 19th-century engraving from the Dover Pictorial Archive. The cover layout was produced with Quark XPress 3.3 using the ITC Garamond font.

    The inside layout was designed by Jennifer Niederst, with modifications by Nancy Priest, and implemented in FrameMaker 5.0 by Mike Sierra. The text and heading fonts are ITC Garamond Light and Garamond Book. The illustrations that appear in the book were created in Macromedia Freehand 5.0 by Chris Reilley.