RISKS-LIST: RISKS-FORUM Digest   Friday, 22 January 1988   Volume 6 : Issue 11

        FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS
  ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents:
  Another One-Character Error (Earl Boebert)
  Safety in MIL-STD-2167A (Nancy Leveson)
  Brady Report on the Crash (Randall Davis)
  Data tampering, CFTC study of Major Market Index (Randy Oppenheimer)
  Court drops 'logic bomb' trial (John Pettitt)
  Official word on Social Security Numbers (Rob Austein)
  VAX/VMS security problem (Philip Taylor via Rob Gross)
  TimeWarps as an omen (Jeffrey R Kell)
  New Year's (Robert Slade)
  Time-chasing (Paul Fuqua)
  Re: New Year's Sun clock (Martin Ewing)
  [*** Lots more pending.  Huge backlog. ***]

The RISKS Forum is moderated.  Contributions should be relevant, sound, in
good taste, objective, coherent, concise, nonrepetitious.  Diversity is
welcome.  Contributions to RISKS@CSL.SRI.COM, Requests to
RISKS-Request@CSL.SRI.COM.
  > > > > > > > > >  PLEASE LIST SUBJECT in SUBJECT: LINE.  < < < < < < < < <
For Vol i issue j, FTP SRI.COM, CD STRIPE:, GET RISKS-i.j.
Volume summaries in (i, max j) = (1,46), (2,57), (3,92), (4,97), (5,85).

----------------------------------------------------------------------

Date: Mon, 18 Jan 88 17:16 EST
From: Boebert@DOCKMASTER.ARPA
Subject: Another One-Character Error

The note about the Honeywell H800 that the Air Force dropped off the loading
dock brought back this memory ...

At the time of that incident, I was an EDP Officer at Hq Air Training
Command.  Our H800 shared a computer room with the Military Personnel Center,
who had just moved the personnel records of all of the officers in the USAF
onto mag tape files on a Burroughs B5000.  The biggest job they ran was
queries, which were written in a perverted first-order predicate calculus and
asked questions like "which officers have specialty codes equal 'xxxx' and
grade equal 'Captain'" and so forth.  Individual records were pulled by the
obvious query "which officers have Service Number equal 'xxxx'..."  The
program loaded a batch of queries into the B5000 and then passed the whole
tape file against it, printing "hits" on line, giving a distinctive rhythm to
the job:

  buzzzzchunkachunkabuzzzchunkabuzzzzzzzzzzzzchunkachunkabuzzz....

One Sunday I came in to play our favorite computer game (called "Beat the
H800 Compiler" or "You Bet Your Project") and noticed that the B5000 next
door was going:

  chunkachunkachunkachunkachunkachunkachunkachunkachunkachunka...

so I went over and pulled rank on the airman who was running the job.
Examination of the input showed that somebody had tried to select a specific
record, but through clerical error had inserted a "not" sign before the
"equal."  Had I not intervened, this would have produced a truckload of paper
containing every officer personnel record in the Air Force, except, of
course, the one they were looking for.
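  [For readers who have not met this failure mode before: in almost any
  query or filter language, negating the one comparison that was meant to
  select a single record selects its complement instead.  A minimal sketch
  in C; the record layout, field names, and service numbers are invented
  for illustration and have nothing to do with the B5000 query language:

    /* Illustrative only: an invented record layout and filter loop, not
       the B5000 query language.  One stray negation turns "select this
       record" into "select everything but this record". */
    #include <stdio.h>
    #include <string.h>

    struct record { char service_no[8]; char name[24]; };

    int main(void)
    {
        struct record file[] = {
            { "1001", "ADAMS" }, { "1002", "BAKER" }, { "1003", "CLARK" }
        };
        int n = sizeof file / sizeof file[0];
        const char *wanted = "1002";
        int i;

        for (i = 0; i < n; i++) {
            /* Intended query: service number EQUAL 'wanted'.  The one
               added character below (!) selects the complement: every
               record except the one wanted. */
            if (!(strcmp(file[i].service_no, wanted) == 0))
                printf("%s  %s\n", file[i].service_no, file[i].name);
        }
        return 0;
    }

  Removing the stray ! restores the intended single-record query.]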
------------------------------

Subject: Safety in MIL-STD-2167A   [Safety in NUMBERS?]
Date: Fri, 22 Jan 88 07:36:17 -0800
From: Nancy Leveson

This may be a case of me being the last to know, but from a briefing on the
new version of the DoD standard for software development (MIL-STD-2167 --
now called 2167A), I learned that one of the stated goals of the new version
is to add safety requirements.  To this end, a requirement has been added for
the contractor "to conduct safety analysis to (a) minimize potential for
hazardous conditions during the operational mission and (b) clearly identify
and document hazards."  There is also a provision added to the Software
Requirements Specification DID to document safety requirements.

Whatever one may think of such standards in general or of these particular
safety requirements, including safety requirements in the software
development standards is a step forward in awareness and concern.

I have written a lot in various places about what I think should be done
during software development in order to increase safety.  It would be
interesting to me to read more in this bulletin board about specific
approaches that others might advocate.  Given that you were in charge of a
project to develop software to control a potentially dangerous system (e.g.,
a nuclear power plant, a medical device such as a linac, or an aircraft),
what (if anything) special would you do to ensure acceptable safety?  Or if
you have already had such experiences, what have you done and did you think
it was effective and adequate?

Nancy Leveson, University of California, Irvine
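  [One small, concrete illustration of the kind of specific approach
  sometimes advocated is the run-time safety assertion: every commanded
  action is checked against independently specified limits, and a violation
  forces the safe state rather than the requested one.  The sketch below is
  hypothetical C; the device, limit values, and names are invented for the
  example and are not drawn from any real system:

    /* Hypothetical sketch of a run-time safety interlock for a made-up
       therapy device.  Commands are checked against independently
       specified limits; violations force the fail-safe value. */
    #include <stdio.h>

    #define MAX_DOSE_UNITS  200   /* invented hazard limit */
    #define SAFE_DOSE_UNITS   0   /* fail-safe value       */

    static int checked_dose(int requested)
    {
        if (requested < 0 || requested > MAX_DOSE_UNITS) {
            fprintf(stderr, "interlock: dose %d refused, forcing %d\n",
                    requested, SAFE_DOSE_UNITS);
            return SAFE_DOSE_UNITS;   /* fail to the safe state */
        }
        return requested;
    }

    int main(void)
    {
        printf("delivered: %d\n", checked_dose(150));    /* accepted */
        printf("delivered: %d\n", checked_dose(5000));   /* refused  */
        return 0;
    }

  Such checks are no substitute for hazard analysis during design, but they
  give the software an explicit, reviewable statement of what it must never
  do.]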
------------------------------

Date: Sun 17 Jan 88 15:53-EST
From: Randall Davis
Subject: Brady Report on the Crash

In view of the numerous discussions about the possible role of portfolio
insurance strategies and technology in the market crash, consider these
comments from/about the recently released Brady report.  The conclusion is
that those strategies and technology were largely not causes of the crash;
there is as well a call for more use of information systems as an effective
way of monitoring the markets and preventing problems in the future.

  (From a Boston Globe news analysis column, 13 January 1988)

                   Brady Panel Hits Mark on the Crash
                              David Warsh

The Brady Report is just back from the printers...  its recommendations boil
down to two basic strategies -- coordinate margin requirements and establish
circuit-breakers (coordinated trading halts and existing price limits)....
For a survey done in 60 days, it's clear the panel ... has done an unusually
good job in construing what happened.  ...  The analytic framework seems
likely to withstand all subsequent attempts to alter it.

The story that emerges confirms what has been previously reported.  It wasn't
``Black Monday'' that was so bad, it was ``Terrible Tuesday,'' when the
markets nearly closed, that was the real shocker.  And although they
contributed a very substantial overhang of selling pressure that hit the
market like a tidal wave on Monday morning, new-fangled trading strategies
like portfolio insurance or index arbitrage did not ``cause'' the crash.  If
anything, various failures of the specialist system, in which 50 little-known
firms commit themselves to buy and sell particular stocks in order to keep
the market orderly, provided the biggest disappointments...

In the end, the problem was in the market mechanisms themselves, the
record-keeping and emergency protocols which permitted a ``disentangling'' of
the futures markets in Chicago and the share markets in NY.  The
recommendation that Brady later described as the ``strongest'' was the one
that had the least to do with public regulation.  It was that a unified
clearing system be developed, linking the Chicago and NY markets, so that
authorities and firms can constantly monitor the shifting action when
turbulence strikes again.  With better information systems, portfolio
insurance and other hedging strategies would no longer pose an especially
serious threat, the task force said.  ...

What we ought to be focusing on, said Brady, was ``technology, a market
that's strung together by 300,000 television screens, where a trade in NY
shows up on a screen in Tokyo 41 seconds later.  We've got one market.  We
ought to be focusing on the problems associated with that.''

------------------------------

Date: Wed, 20 Jan 88 10:01:40 EST
From: Randy_Oppenheimer@IMG011.CEO.DG.COM
Subject: Data tampering, CFTC study of Major Market Index

Date: January 20, 1988
Organization: Data General

The Wall Street Journal (1-7-88) carried a story examining whether the Major
Market Index (MMI) was manipulated at a critical point during the stock
market crash.  The MMI is a little-known futures contract index.  According
to the Journal, its "mysterious surge...may have saved the stock market from
total meltdown."

The gist of the story is that a study by the Commodity Futures Trading
Commission (CFTC) determined there was no evidence of any manipulation.  That
finding, the Journal reported, immediately came under attack by various
persons, who questioned even the data that the CFTC examined, claiming it may
have been doctored.  The Journal notes a congressional committee is now
investigating allegations that the data used in the study may have been
incorrect or tampered with before it was submitted to the CFTC.

The Journal article concludes: "In Chicago, a spokesman for the Board of
Trade, which supplied much of the data used by the CFTC, declined comment.  A
Board of Trade official familiar with the data said he is skeptical the data
could have been tampered with, noting that it is computer-generated."

------------------------------

Date: Mon Jan 18 12:40:52 1988
From: pyrltd!jpp%mcvax@uunet.UU.NET
From: jpp@slxsys.specialix.co.uk (John Pettitt)
Subject: Court drops 'logic bomb' trial
Organization: Specialix International, London, UK.

Reproduced without permission from 'Datalink', Monday 11 Jan 1988:

James McMahon, the contract systems programmer accused of planting "logic
bombs" in his client's computer systems, has been cleared of all charges.
McMahon walked free from Isleworth Crown Court, London, late last month after
the presiding judge Derek Holden accepted a mid-trial defence motion that the
evidence against McMahon was inconsistent, incomplete and lacking in
reliability.

The ruling, which focused on print-out and disk exhibits, promises to be a
watershed in the history of computer law, influencing the validity of such
admissions in future cases.

The trial was billed as the UK's first "logic bomb" case, with McMahon
accused of planting unauthorised code in the DEC PDP-11 system software of
air freight forwarder Pandair Freight.  The prosecution claimed that one such
"logic bomb" locked terminals at Pandair's Heston office, near Heathrow, and
a second was set to wipe the memory of the company's Birmingham computer.

John Pettitt, Specialix, Giggs Hill Rd, Thames Ditton, Surrey, England, KT7 0TR
{backbone}!mcvax!ukc!pyrltd!slxsys!jpp     jpp@slxsys.specialix.co.uk
Tel: +44-1-398-9422   Fax: +44-1-398-7122   Telex: 918110 SPECIX G

------------------------------

Date: Tue 19 Jan 88 17:24:24-EST
From: Rob Austein
Subject: Official word on Social Security Numbers

For what it's worth, here's the "official" story on SSNs, from a USENET
posting by David Hawkins.  I have not verified the quote.

According to Social Security Administration Publication No.
05-10001 (Sept 86):

                 DISCLOSING YOUR SOCIAL SECURITY NUMBER

  "Any Federal, State or local agency that asks for your Social Security
  number must tell you whether giving it is mandatory or voluntary, under
  what authority the number is being requested, and what uses will be made
  of it.  Some non-governmental organizations also use Social Security
  numbers for recordkeeping purposes.  Such use is neither required nor
  prohibited by Federal law.  Although you are not required to give your
  number, the organization is not required to provide you service if you do
  not.  Knowing your number does not allow these organizations to get
  information from your Social Security record."

I don't know how this applies to semi-public entities like utility companies.

Use of an SSN as a Driver's License ID number poses an interesting problem:
the state government is presumably within the law in using your SSN as their
internal ID number, but should they be printing it on your license?  Seems
kinda irresponsible.  What if somebody steals this funny little piece of
plastic that the government requires you to carry when you drive your car?
In effect, the state government has just disclosed your SSN to your mugger.
Sure, the mugger's the one who's breaking the law, but it's the state
government's fault that you're carrying your SSN around with you when you're
in the car.  Maybe that's why hitchhiking is illegal in so many states? [:-)]

Of course, in states where the Driver's License ID number is different from
the SSN, you simply have two ID numbers that are demanded of you at different
times; they're both required for "normal life".  Not much of an improvement.

------------------------------

Date: Thu, 21 Jan 88 16:54 EST
From: (Rob Gross)
Subject: VAX/VMS security problem
Organization: Boston College

The following was recently posted to the INFO-VAX mailing list:

  Date: Tue, 19 Jan 88 12:08:50 GMT
  Reply-To: "RHBNC, Univ of London Philip Taylor"
  Sender: INFO-VAX Discussion
  From: CHAA006%vaxb.rhbnc.ac.uk@NSS.Cs.Ucl.AC.UK
  Subject: VMS security

  I believe I have discovered a serious loophole in VMS security.  If
  breakin-detection is in force, and a user enters his/her username
  incorrectly, without noticing the error, then enters the correct password,
  that password can appear on the operator console and in the operators'
  log.  This occurs when the same, incorrect, username is entered sufficient
  times for breakin-detection to become activated.  As it is not unknown for
  system managers to reduce the detection limit to two, the appearance of
  such passwords, in clear, is a distinct possibility.

  For example, a user changes his/her password; later, on logging in,
  mis-types the username (but doesn't notice the fact), and enters the old
  password; sees "Invalid username/password", and remembers that he/she has
  a new password; uses / to recall the username (to save re-typing it), then
  enters the new, correct, password.  Breakin-detection is set at two, and
  the correct password, plus the username with perhaps a single error in it,
  appear in clear.  An unlikely scenario?  Well, it happened to me,
  yesterday!

  Since for common privileged usernames such as SYSTEM, it would typically
  be the work of a moment to guess the mis-typed username, system security
  can be seriously compromised.  Furthermore, anything which results in a
  valid password being stored and displayed in clear is a serious breach of
  the zeroth rule of system security.

                                        ** Phil.
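  [The hazard Phil describes is an instance of a more general one: any login
  path that records the raw text of failed attempts will eventually record a
  password in clear, because users mis-type and mis-order fields.  A minimal
  sketch of the general problem in C; this is hypothetical code, not VMS
  source, and it does not reproduce VMS's actual breakin-detection logic:

    /* Toy login loop, purely hypothetical.  Once failures pass a
       threshold, the raw "username" of each failed attempt is written to
       an operator log.  When a user types a password where a username is
       expected (an easy slip), the password lands in the log in clear. */
    #include <stdio.h>
    #include <string.h>

    #define BREAKIN_THRESHOLD 2

    static int valid(const char *user, const char *pass)
    {
        /* stand-in for the real authentication check */
        return strcmp(user, "SYSTEM") == 0 && strcmp(pass, "S3CRET") == 0;
    }

    static void operator_log(const char *text)
    {
        /* stand-in for the operator console and log file */
        printf("OPCOM: break-in attempt, input was \"%s\"\n", text);
    }

    int main(void)
    {
        const char *attempts[][2] = {
            { "SYSTEN", "S3CRET" },  /* mistyped username, right password */
            { "S3CRET", "SYSTEM" },  /* fields swapped: password first    */
        };
        int failures = 0;
        int i;

        for (i = 0; i < 2; i++)
            if (!valid(attempts[i][0], attempts[i][1]))
                if (++failures >= BREAKIN_THRESHOLD)
                    operator_log(attempts[i][0]);   /* may be a password */
        return 0;
    }

  The usual remedy is to log only the fact and source of the failure, never
  the raw strings that were typed.]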
------------------------------

Date: Tue, 19 Jan 88 14:30:21 EDT
From: Jeffrey R Kell
Subject: TimeWarps as an omen

After reading through yet another year's assortment of clock-related bugs, an
ominous realization of the scope of the Star Wars critique by David Parnas
came to light.  Every year we hear of clock-related bugs, even more so during
leap years, and may the bits beware on 1 Jan 2000.

Here we have a relatively trivial, extremely well-defined task of rolling
over a clock to update a year.  In the extremely simplistic case of mere
changes of minutes, hours, or day, there are enough "real" cases to give a
real test, and find (most) bugs.  But for more extreme cases, the testing is
done through 'simulations' and you simply are not dealing with the real
events; it is extremely difficult to test in the actual environment.  Very
few of us, I suspect, would actually bother to repeatedly reboot a real
system with the test time placed in the real clock to see if it works.

The problem is not with "inexperienced Mickey Mouse" programmers either.
Look at the IBM 3090s that called in for service due to a bug in the clock
routine during the system's early days, or the Sun problems, or any other of
the nightmares that appeared in RISKS.  Many were the work of people who
"should have known better" or "should have tested more thoroughly."

If we are unable to keep a clock/calendar operating correctly, how can we
possibly presume that a massively complex, ill-defined system like SDI can
work, given the added impossibility of a real-life test environment?  If SDI
is completed, and we must use it, and it fails, we won't have to bother with
clock-setting algorithms any longer.

Jeffrey R Kell, Dir Tech Services, Univ of Tennessee at Chattanooga

    [It is always tempting to conclude that if such a simple thing cannot be
    done correctly, then how can 10 million lines of code work adequately?
    This is a debate that has no end, although maybe we are ready to go
    around again on SDI, a subject that has received considerable discussion
    in earlier volumes of RISKS!  Nevertheless, the moral of the story is
    clear -- the more complex the system, the greater the attention that
    must be paid to it, from the overall design down to the minute
    details...  PGN]

------------------------------

Date: Thu, 21 Jan 88 07:56:28 PST
From: Robert_Slade@mtsg.ubc.ca
Subject: New Year's

With regard to the computers dying over New Year's, my father-in-law just
came up with a real oddball.  He was using AppleWorks, patched to take
advantage of extra memory, a clock card, and a few other goodies.  After the
Christmas holidays, his system no longer fired up automatically, and instead
had to be babied to get it to work.

The final answer turned out to be in ProDOS, a currently popular operating
system for the Apple that has superseded Apple's own DOS 3.3.  A number of
clock cards for the Apple (my father-in-law's not being among them) do not
store the year.  ProDOS very kindly calculates the year from the month, day,
and day of the week.  The tables for doing this, however, are limited, and
one of the anniversary dates for early versions was 1988.  Later versions
will fail in future years...
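  [The trick of inferring the year from the day of the week only works over
  the span of years the code was built with.  A rough sketch of the idea in
  C; the year range below is invented for illustration and is not ProDOS's
  actual table:

    /* Sketch of year inference from month, day, and day-of-week, as a
       clock card without a year register forces the OS to do.  The year
       range is invented for illustration only. */
    #include <stdio.h>

    #define FIRST_YEAR 1984   /* span the code was built with */
    #define LAST_YEAR  1987

    /* Sakamoto's method: 0 = Sunday ... 6 = Saturday */
    static int day_of_week(int y, int m, int d)
    {
        static const int t[] = {0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4};
        if (m < 3) y -= 1;
        return (y + y/4 - y/100 + y/400 + t[m-1] + d) % 7;
    }

    /* Return the first year in range consistent with the reading, or -1. */
    static int infer_year(int month, int day, int dow)
    {
        int y;
        for (y = FIRST_YEAR; y <= LAST_YEAR; y++)
            if (day_of_week(y, month, day) == dow)
                return y;
        return -1;   /* no year in the built-in table matches */
    }

    int main(void)
    {
        /* The card reports 1 January, a Friday (5), and no year. */
        printf("inferred year: %d\n", infer_year(1, 1, 5));
        return 0;
    }

  On 1 January 1988 (a Friday) no year in this built-in range matches, so
  the inference fails -- exactly the kind of quiet expiry described above.]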
------------------------------

Date: Thu, 21 Jan 88 22:07:09 CST
From: Paul Fuqua
Sender: pf%mips.csc.ti.com@RELAY.CS.NET
Subject: Time-chasing and SSNs

I had some fun chasing a computer-clock problem a couple of years ago.  At
that time we had six or seven Symbolics 3600 lispms, which initialise their
real-time clocks at boot time by broadcasting a request for the time on the
local Ethernet.  The machines were divided between rooms on the first and
third floors of the building.

I noticed one day that the machine I sat down at had a wildly inaccurate
time.  Fortunately, the time-initialising code records the machine from which
it received its response; it can be important to track down bad time sources.
I checked the record, and trotted downstairs to discover that the second
machine was similarly inaccurate; its response had come from a third machine,
upstairs.

The conclusion to this story may be obvious: I ran up and down the stairs
several times, and discovered that the last machine had received its time
response from the first!  I ended up setting the time by hand.

  [It should be noted that more recent software manages to ask only reliable
  time-servers for the time.]

Paul Fuqua, Texas Instruments Computer Science Center, Dallas, Texas
CSNet:  pf@csc.ti.com or pf@ti-csl
UUCP:   {smu, texsun, im4u, rice}!ti-csl!pf

------------------------------

Date: Mon, 18 Jan 88 14:35:36 PST
From: msesys@DEImos.Caltech.Edu (Martin Ewing)
Subject: Re: New Year's Sun clock

On the subject of the Sun/new year's clock problem (cf Rayan Zachariasen),
which turned out to result from a mistaken use of C expression side-effects:

  > ...many many people were thinking of the careless programmer
  > with warm, sizzling, thoughts.

Personally, I'd reserve a number of "warm, sizzling, thoughts" for the people
who brought us C and Unix, who made this sort of mistake almost inevitable.

    [This message is similar to other RISKS submissions that I have rejected
    in the past.  I include this one as representative of the others, but
    with a serious comment: In this field YOU ARE ALWAYS AT RISK.  If RISKS
    tells you nothing else, it is KNOW AND UNDERSTAND YOUR RISKS.

    A comment on UNIX and C: Ken Thompson is one of the most brilliant
    designers and programmers ever to grace this earth.  He developed UNIX
    and C primarily for his own pleasure.  It is not HIS FAULT that UNIX is
    so widely used (e.g., because of its delightful facilities for program
    development and ease of adaptation), or -- by extension -- that it is
    used unwisely in hostile environments despite its not having addressed
    critical security concerns.  A similar argument could be made by people
    who blindly accept free software from a BBOARD (e.g., the PC graphics
    ARF-ARF Trojan horse) or a Trojan horsey virus, and then complain when it
    destroys all their files.  There are very complex tradeoffs among
    simplicity and ease of use on one hand, and safe systems (in a
    generalized sense) on the other hand.  Know your requirements before you
    start designing, programming, or simply using a computer system.  PGN]

------------------------------

End of RISKS-FORUM Digest
************************