RISKS-LIST: RISKS-FORUM Digest   Monday 4 April 1988   Volume 6 : Issue 54

        FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS
  ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents:
  Re: April Fool's Warning from Usenet (Gene Spafford)
  Intolerant Fault-Tolerance (Jerome H. Saltzer)
  How Computers Get Your Goat (PGN)
  Old viruses (Jerry Leichter)
  Re: Notifying users of security problems (Andy Goldstein)
  The "previous account" referred to in RISKS-6.51 (Les Earnest)
  Just Another Unix Spoof (Paul Cudney)

The RISKS Forum is moderated.  Contributions should be relevant, sound, in
good taste, objective, coherent, concise, nonrepetitious.  Diversity is
welcome.  Contributions to RISKS@CSL.SRI.COM, Requests to
RISKS-Request@CSL.SRI.COM.  For Vol i issue j, FTP SRI.COM, CD STRIPE:,
GET RISKS-i.j.  Volume summaries in (i, max j) = (1,46),(2,57),(3,92),
(4,97),(5,85).

----------------------------------------------------------------------

From: spaf@purdue.edu (Gene Spafford)
Subject: Re: April Fool's Warning from Usenet (RISKS-6.52)
Date: 4 Apr 88 23:34:57 GMT

In RISKS 6.52, Cliff Stoll forwarded a posting on the Usenet about forged
articles.  He attributed it to me, and unfortunately either Cliff or Peter
trimmed most of the news header lines out.

Why was that unfortunate?  Because the article was itself a forgery, and
its headers exhibited all of the indicators the posting warned would appear
in bogus articles!  It was a marvelous joke, except for the fact that I've
gotten about 40 mail messages so far from people who didn't realize that it
was a forgery.  Now it shows up in RISKS!  I am 99% certain who did it, and
I can't wait for next April 1....

Gene Spafford
NSF/Purdue/U of Florida Software Engineering Research Center,
Dept. of Computer Sciences, Purdue University, W. Lafayette IN 47907-2004
Internet: spaf@cs.purdue.edu   uucp: ...!{decwrl,gatech,ucbvax}!purdue!spaf

    [Mortifications from the Moderator, who tries to keep RISKS Readable
    by Hewing Headers.  In this case I should have left the entire sequence
    in, to add to the evidence described in the message.  Very clever.  PGN]

------------------------------

Date: Sun, 3 Apr 88 14:10:38 EST
Subject: Intolerant Fault-Tolerance (RISKS-6.53)
From: Jerome H. Saltzer

> From: mann (Tim Mann)  . . .
> This incident teaches two lessons about engineering a fault-tolerant
> system, neither of which should come as a surprise.  First, a
> fault-tolerant system must report the faults it tolerates ...

@begin(Soapbox)

This lesson reported by Tim Mann bears underlining.  I would guess that the
single design mistake I have seen repeated most often in 25 years of
observing computer systems is this one: providing what appears to be fault
tolerance, but neglecting to provide a means of reporting when a fault has
been encountered and successfully masked.  As a result, many so-called
fault-tolerant systems actually run for much of their lifetime in a state
where a single further fault will bring them down.  A redundant system that
is thought to be operating well back from the edge of the catastrophe cliff
may actually be standing on the edge without its users or operators being
aware.  Examples are legion, even in the systems we use every day:

  - backup tapes with write errors, undiscovered till they are needed.

  - packets mysteriously lost in the Ethernet or the gateway; the
    higher-level protocol successfully retries.

  - the non-responding internet name server; the next one in the list
    responds.
  - mail links that are down more often than up, but up enough that the
    mail usually gets through.

A system that provides redundancy but omits any mechanism to call for
repair when the redundancy is invoked is more complex and expensive than a
non-redundant system, but it ISN'T really fault-tolerant.

A closely related problem occurs when a fault-tolerant system with properly
engineered fault reporting is operated in a mode where its calls for help
are ignored or given such low priority that they might as well not be
there.  The blame in this case isn't with the original engineers (unless
they buried the calls for help in the middle of a log full of uninteresting
events), but the RISK is the same.

Next time someone shows off a "fault-tolerant" system that seems to be able
to survive having a .45-caliber slug fired through one component, ask, as
part of the demo, to see the trouble report that the system generated in
response to the incident.  If there isn't one, take your business
elsewhere.

@end(Soapbox)
                                                Jerry Saltzer
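
To make the reporting requirement concrete, here is a minimal sketch of
Saltzer's name-server example (in Python, with hypothetical server names
and a simulated query function, not any particular system's code), written
so that every masked fault is reported rather than silently tolerated:

    import logging
    import random

    logging.basicConfig(level=logging.WARNING)
    log = logging.getLogger("resolver")

    # Hypothetical redundant name servers, in preference order.
    SERVERS = ["ns1.example.com", "ns2.example.com", "ns3.example.com"]

    def query(server, name):
        # Stand-in for a real DNS query; fails at random to simulate faults.
        if random.random() < 0.5:
            raise OSError("no response from " + server)
        return "10.0.0.1"          # dummy answer

    def resolve(name):
        # Try each server in turn.  The crucial part is the log.warning():
        # each tolerated fault generates a trouble report, so operators can
        # see how close to the edge of the cliff the system is standing.
        for i, server in enumerate(SERVERS):
            try:
                return query(server, name)
            except OSError as e:
                log.warning("%s failed (%s); %d spare(s) left",
                            server, e, len(SERVERS) - 1 - i)
        raise RuntimeError("all redundant servers failed for " + name)

    try:
        print(resolve("sri.com"))
    except RuntimeError as e:
        print("lookup failed:", e)

A version that omits the warning behaves identically on the demo floor; the
difference shows up only in the operator's log, which is exactly the point.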
------------------------------

Date: Sat 2 Apr 88 10:46:01-PST
From: Peter G. Neumann
Subject: ``How Computers Get Your Goat''

Anyone who has used a personal computer has been forced to wait while the
machine completed some task.  A University of Texas at Arlington researcher
has found that such waiting can produce anxiety, and theorizes that such
anxiety reduces productivity.  The researcher, Jan L. Guynes, used
psychological tests to classify 86 volunteers as either Type A or Type B
personalities.  The volunteers were given 20 minutes to make editing
corrections on text in a personal computer into which delays had been
programmed.  She found that a slow, unpredictable computer increased
anxiety in both groups equally, even though Type A personalities were
generally more anxious before undertaking the editing task, and that such
added anxiety may affect performance.

New York Times item, from the SF Chronicle, 30 March 1988, p. A3.

------------------------------

Date: Mon, 4 Apr 88 17:31 EST
From: "Jerry Leichter (LEICHTER-JERRY@CS.YALE.EDU)"
Subject: Old viruses

In a recent RISKS, Bill Kennedy mentions a program he saw on an IBM 360
back in 1974 which submitted multiple copies of itself.  This rang a bell;
I remember hearing talk of a similar program at Princeton.  Since I
graduated in 1973, the idea goes back at least that far.

No one claimed to have actually run it themselves - it was always something
they had heard about someone else doing.  But the possibility was certainly
understood, and I recall discussions about the consequences, and
speculations about how many copies of itself the program should submit for
maximum effect.  (Anything more than an average of one was certain to clog
the system eventually; but you could modulate how long it would take for
the system to slowly grind to a halt.)

There was also discussion of counter-moves.  Given the way OS/360 and ASP
were structured, the best approach we could come up with was to remove the
account the program was running in - the program, just as in Bill Kennedy's
story, was usually named "RABBIT" or "RABBITS".  Given the general
insecurity of OS/360, however, it wasn't hard to get different copies of
RABBITS to run under many different accounts.  Such a RABBITS program could
be quite difficult and expensive to kill - clearing out the queues on a
batch-oriented system is not something to be done lightly!

BTW, the great-granddaddy of all such programs wasn't a virus at all - it
arose innocently as a bug in some early version of OS/360.  The exact
details are lost in the mists of time, but they went something like this:
If your program abend'ed (aborted), you got a post-mortem dump.  Some sort
of job setting requested that the dump be printed.  Often, you could
quickly determine that the dump was of no interest.  So you asked the
operator to kill the print job.  Unfortunately, the "print dump on abend"
switch stayed on: killing the print job led to a post-mortem dump....
                                                        -- Jerry
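
The arithmetic behind "anything more than an average of one" is worth
spelling out.  Below is a small illustrative sketch (Python, a pure
simulation - it submits nothing) treating the queue as a branching process:
if each job resubmits an average of r copies of itself, the expected
population after n generations is r**n, so any r > 1 eventually clogs the
system, and picking r only slightly above 1 is what stretches out the time
it takes to grind to a halt.

    # Expected number of RABBIT jobs in the queue, modeled as a branching
    # process: each job resubmits an average of r copies of itself.
    def population(r, generations):
        jobs = 1.0
        for _ in range(generations):
            jobs *= r
        return jobs

    for r in (0.9, 1.0, 1.1, 2.0):
        print("r = %.1f -> %18.1f expected jobs after 50 generations"
              % (r, population(r, 50)))

With r = 1.1 the queue holds roughly 117 expected jobs after 50
generations; with r = 2 it holds roughly 10**15, which no batch system
survives.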
------------------------------

Date: Mon, 4 Apr 88 15:28:53 PDT
From: goldstein%star.DEC@decwrl.dec.com (Andy Goldstein)
Subject: Re: Notifying users of security problems

> Date: 31 Mar 88 01:25:29 PST (Thursday)
> From: "hugh_davies.WGC1RX"@Xerox.COM
> Subject: Re: Notifying users of security problems
>
> In RISKS 6.50, Andy Goldstein (goldstein%star.DEC@decwrl.dec.com)
> states..
>
> "Sending out notice of the presence of a bug without a correction or
> workaround is of course even more irresponsible."  ...
>
> When I first saw this I couldn't believe what I was reading. ...
> Please, Andy, tell me I've got it wrong!

Maybe you did misunderstand me; I should have been more precise in the
statement you quoted.  I was referring specifically to security bugs.  That
said, I stand by my statement.  Let me try to explain...

When a piece of software is shipped containing a bug, knowledge of that bug
is contained in the software, in a manner of speaking.  At the same time,
in most cases knowledge of the bug is not held by any person.  That is, the
bug was created inadvertently and unknowingly by the author(s) of the
software, and no one has discovered it yet.  A bug does its damage when it
is somehow invoked, by use or misuse of a certain feature, or by the
unhappy confluence of certain conditions.

By and large, ordinary bugs are encountered by users innocently going about
their business.  That is, no prior knowledge of the bug by the user is
involved in encountering the bug; knowledge of the bug by the system is
sufficient.  Furthermore, the effect of the bug is in general to cause
system behavior which is undesirable to the user.  Consequently, knowledge
of the bug will often permit the user to work around it or defend against
it.  Since a virus spreads without knowledge of the user, it too falls into
this category.  Sharing information about most types of bugs, including the
existence and nature of particular viruses, is productive and worthwhile.

Now let us compare security bugs to ordinary bugs.  I define a security bug
as one which permits a user to violate a system's security controls in some
significant way (e.g., allowing an ordinary user to become superuser or
whatever).  Security bugs are by and large not encountered by people
innocently going about their business.  They are usually found by the
adventurous by inspecting system sources, and are invoked only through
creative abuse of obscure system features.  (I cannot argue this point with
logic, but many years of experience dealing with security bugs tell me it
is so.)  Most system users (I mean users, not administrators) do not care
about security bugs, because security bugs do not stand in the way of their
getting their work done.  The people who care about security bugs are
hackers (and of course the system managers trying to fend them off).  From
the point of view of the hacker, a security bug is an undocumented feature
of the system that allows him to do what he wants to do.

So we get to the critical distinction between security bugs and others:
Because invocation of a security bug requires a deliberate, unusual action,
a security bug is only harmful to an installation when malicious users gain
knowledge of the bug.  The best analogy I can think of is a lock
manufacturer discovering that one of its locks can be easily picked using a
previously unknown technique.

The challenge we have with security bugs, therefore, is (1) not shipping
them to begin with, (2) fixing them as promptly as possible when they are
discovered, and (3) keeping knowledge out of the hands of the bad guys
until the bugs can be fixed.  Points (1) and (2) are of course mere matters
of engineering, manufacturing and distribution.  Because we will never
achieve instantaneous development and distribution of bug fixes, (3) is the
kicker.

I have heard many arguments that system managers should be permitted to
learn about security bugs, either from the manufacturer or informally via
the grapevine.  With respect to the VAX/VMS user community, I disagree with
this conclusion for several reasons:

(1) The knowledge won't do them any good.  We are long past the time when
    every computer installation had its wizard who knew (or thought he
    knew) how to fix every problem that might come up.  [Digression: I'm
    sure half the university system managers have just hit the ceiling.
    Universities are unique in having available a large pool of cheap,
    highly talented labor.  Among our engineering and commercial
    customers, technically skilled labor is expensive and hard to come by.
    Our working assumption is that the majority of our customer base does
    not, or would rather not have to, understand the internals of VMS to
    use it.]

(2) The news may do them harm.  Would you, as DP manager of Bank of
    America, install a "security patch" that originated from, say, UC
    Berkeley?

(3) The knowledge may do them harm.  Nowadays, any fairly well-off high
    school kid can buy himself a MicroVAX and be a bona fide system
    manager.  There is no practical way to tell the good guys from the bad
    guys anymore.  The larger the number of people who know of the
    existence of a security problem, the more likely it is that a bad guy
    will gain the knowledge necessary to exploit it.

Consequently, DEC has taken the following approach to dealing with security
bugs in the future:

(1) When a security bug is discovered, engineering will develop a fix as
    rapidly as possible.  The fix will be distributed to customers as
    rapidly as circumstances warrant.  To the extent possible, the fix
    will be constructed so as to make it difficult to reverse-engineer the
    bug from the fix.

(2) Once the fix has been distributed, all customers will be notified of
    the existence of the problem and informed of the urgency of installing
    the fix.

Thus we let the cat out of the bag (hopefully) only after users have been
given the tools with which to skin it.  While this policy of secrecy does
carry the possibility that a small number of users may incur duplicated
effort investigating a security bug, we feel this is a worthwhile trade
towards ensuring the safety of the majority of the customer base.  I also
emphasize that this policy applies only to security bugs that have no
operational workaround.

Andy Goldstein, VMS Development

------------------------------

Date: 01 Apr 88 1620 PST
From: Les Earnest
Subject: The "previous account" referred to in RISKS-6.51

                    e-t-a-o-n-r-i Spy and the F.B.I.

Reading a book got me into early trouble -- I had an F.B.I. record by age
twelve.
This bizarre incident caused a problem much later when I needed a security
clearance.  I learned that I could obtain one only by concealing my sordid
past.

A friend named Bob and I read the book "Secret and Urgent," by Fletcher
Pratt [Blue Ribbon Books; Garden City, NY; 1942], an early popular account
of codes and ciphers.  Pratt showed how to use letter frequencies to break
ciphers and reported that the most frequently occurring letters in typical
English text are e-t-a-o-n-r-i, in that order.  (The letter frequency order
of the story you are now reading is e-t-a-i-o-n-r.  The higher frequency of
"i" probably reflects the fact that _I_ use the first person singular a
lot.)
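
Pratt's ordering is easy to check.  Here is a small illustrative sketch
(Python, reading whatever text you feed it on standard input; the file name
in the usage line below is only an example) that prints the most frequent
letters of a text in descending order:

    from collections import Counter
    import sys

    # Count letters only, case-folded; most_common() returns them in
    # descending order of frequency.  On sizable samples of ordinary
    # English the result comes out close to Pratt's e-t-a-o-n-r-i.
    text = sys.stdin.read().lower()
    counts = Counter(ch for ch in text if ch.isalpha())
    print("-".join(letter for letter, n in counts.most_common(7)))

Run as, say, "python freq.py < story.txt"; fed Les Earnest's story itself,
it should come out close to the e-t-a-i-o-n-r ordering he reports.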
Pratt's book also treated more advanced cryptographic schemes.  Bob and I
decided that we needed to have a secure way to communicate with each other,
so we put together a rather elaborate jargon code based on the principles
described in the book.  I don't remember exactly why we thought we needed
it -- we spent much of our time outside of school together, so there was
ample time to talk privately.  Still, you never could tell when you might
need to send a secret message!

We made two copies of the code key (a description of how to encrypt and
decrypt our messages) in the form of a single typewritten sheet.  We each
took a copy and carried it on our persons at all times when we were wearing
clothes.

I actually didn't wear clothes much.  I spent nearly all my time outside
school wearing just a baggy pair of maroon swimming trunks.  That wasn't
considered too weird in San Diego.  I had recently been given glasses to
wear but generally kept them in a hard case in the pocket of the trousers
that I wore to school.  I figured that this was a good place to hide my
copy of the code key, so I carefully folded it to one-eighth of its
original size and stuck it at the bottom of the case, under my glasses.

Every chance I got, I went body surfing at Old Mission Beach.  I usually
went by streetcar and, since I had to transfer Downtown, I wore clothes.
Unfortunately, while I was riding the trolley home from the beach one
Saturday, the case carrying my glasses slipped out of my pocket unnoticed.
I reported the loss to my mother that night.  She chastised me and later
called the streetcar company.  They said that the glasses hadn't been
turned in.

After a few weeks of waiting in vain for the glasses to turn up, we began
to lose hope.  My mother didn't rush getting replacement glasses, in view
of the fact that I hadn't worn them much and that they cost about $8, a
large sum at that time.  (To me, $8 represented 40 round trips to the beach
by streetcar, or 80 admission fees to the movies.)

Unknown to us, the case had been found by a patriotic citizen who opened
it, discovered the code key, recognized that it must belong to a Japanese
spy, and turned it over to the F.B.I.  This was in 1943, just after
citizens of Japanese descent had been forced off their property and taken
away to concentration camps.  I remember hearing that a local grocer was
secretly a Colonel in the Japanese Army and had hidden his uniform in the
back of his store.  A lot of people actually believed these things.

About six weeks later, when I happened to be off on another escapade, my
mother was visited by a man who identified himself as an investigator from
the F.B.I.  (She was a school administrator, but happened to be at home
working on her Ph.D. dissertation.)  She noticed that there were two more
men waiting in a car outside.

The agent asked a number of questions about me, including my occupation.
He reportedly was quite disappointed when he learned that I was only 12
years old.  He eventually revealed why I was being investigated, showed my
mother the glasses and the code key, and asked her if she knew where the
key came from.  She didn't, of course.  She asked if we could get the
glasses back and he agreed.

My mother told the investigator how glad she was to get them back,
considering that they cost $8.  He did a slow burn, then said "Lady, this
case has cost the government thousands of dollars.  It has been the top
priority in our office for the last six weeks.  We traced the glasses to
your son from the prescription by examining the files of nearly every
optometrist in San Diego."  It apparently didn't occur to them that if I
were a REAL Japanese spy, I might have brought the glasses with me from
headquarters.

The F.B.I. agent gave back the glasses but kept the code key "for our
records."  They apparently were not fully convinced that they were dealing
just with kids.  Since our communication scheme had been compromised, Bob
and I devised a new key.  I started carrying it in my wallet, which I
thought was more secure.  I don't remember ever exchanging any
cryptographic messages.  I was always ready, though.

A few years later, when I was in college, I got a summer job at the Naval
Electronics Lab, which required a security clearance.  One of the questions
on the application form was "Have you ever been investigated by the
F.B.I.?"  Naturally, I checked "Yes."  The next question was, "If so,
describe the circumstances."  There was very little space on the form, so I
answered simply and honestly, "I was suspected of being a Japanese spy."

When I handed the form in to the security officer, he scanned it quickly,
looked me over slowly, then said, "Explain this" -- pointing at the F.B.I.
question.  I described what had happened.  He got very agitated, picked up
my form, tore it in pieces, and threw it in the waste basket.  He then got
out a blank form and handed it to me, saying "Here, fill it out again and
don't mention that.  If you do, I'll make sure that you NEVER get a
security clearance."

I did as he directed and was shortly granted the clearance.  I never again
disclosed that incident on security clearance forms.  On another occasion
much later, I learned by chance that putting certain provocative
information on a security clearance form can greatly speed up the clearance
process.  But that is another story.

Les Earnest

------------------------------

Date: Sat, 2 Apr 88 13:19:08 PST
From: cudney@sm.unisys.com (Paul Cudney)
Subject: Just Another Unix Spoof -- ISO abandoned

Are you engaging in fault-encouragement behavior?  If so, here is Just
Another Unix Spoof (JAUS) to chew on.                            /Paul

Path: sdcrdcf!ucla-cs!rutgers!bellcore!faline!thumper!kremvax!meese
From: meese@kremvax.arpa
Newsgroups: comp.protocols.tcp-ip,comp.protocols.iso
Subject: OSI abandoned!
Message-ID: <880401@kremvax.arpa>
Date: 1 Apr 88 00:00:01 GMT
Organization: Soviet Sanctuary for Victims of American Persecution
Posted: Fri Apr 1 00:00:01 1988

WASHINGTON -- In a simultaneous announcement that took the computer
industry by surprise, OSI leaders today said that they were abandoning
their effort to promote the OSI Protocol Suite in favor of the existing US
Department of Defense (DoD) ARPANET Protocol Suite.
The official reason cited for the decision was a new report from the Office
of Technology Assessment stating that the manpower required to fully
implement and test even the few OSI protocols that are now defined would
consume the entire output of American university computer science programs
for the rest of the century, and that printing and distributing the
necessary protocol specifications would consume the entire American and
Canadian paper supplies for the next five years.

However, one high-placed source speaking on condition of anonymity said,
``The whole OSI thing was a practical joke one of the guys cooked up a few
years ago.  Nobody ever expected anybody to take it seriously.  I mean, who
would believe an organization supposedly dedicated to tearing down barriers
to free and open communications between computers when it's run by a former
director of the National Security Agency?  I guess computer people are a
lot more gullible than we thought.  We kept dropping hints, making the
whole thing more and more ridiculous.  We hoped that people would
eventually catch on, but it didn't work.  Finally, our consciences got to
us.''

In related news, officials at the Mitre Corporation in Bedford,
Massachusetts reported that one of their employees, as yet publicly
unidentified, froze ``as solid as stone'' when he heard the announcement.
Medical experts have as yet been unable to communicate with the victim or
to get him to relax his facial muscles, which are reportedly locked into
what was described as an ``enormous grin''.

AP-NR-04-01-88 0001EST

------------------------------

End of RISKS-FORUM Digest
************************