Subject: RISKS DIGEST 9.81
REPLY-TO: risks@csl.sri.com

RISKS-LIST: RISKS-FORUM Digest  Wednesday 18 April 1990  Volume 9 : Issue 81

   FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS
   ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents:
  RISKS, SENDMAIL, and YOU! (PGN)
  London Tube train leaves ... without its driver (Stephen Page)
  Shuttle roll incident on January '90 mission (Henry Spencer)
  Software failures on Boeing 747-400? (Trevor Warwick)
  False 1099 forms (Phil R. Karn)
  Re: Risks [9.080] of Daylight Savings Time (Thomas Zmudzinski, Chuck Weinstock)
  Comment on UK Software Standards (Richard Morton) [RISKS-9.1 and 2]
  Automated Fast Food (David Bank)

The RISKS Forum is moderated.  Contributions should be relevant, sound, in
good taste, objective, coherent, concise, and nonrepetitious.  Diversity is
welcome.  CONTRIBUTIONS to RISKS@CSL.SRI.COM, with relevant, substantive
"Subject:" line (otherwise they may be ignored).  REQUESTS to
RISKS-Request@CSL.SRI.COM.
TO FTP VOL i ISSUE j:  ftp CRVAX.sri.com, login anonymous, AnyNonNullPW,
cd sys$user2:[risks], get risks-i.j.  Vol summaries now in risks-i.0 (j=0).

----------------------------------------------------------------------

Date: Tue, 17 Apr 1990 10:11:36 PDT
From: "Peter G. Neumann"
Subject: RISKS, SENDMAIL, and YOU!

If you are wondering why you have not seen many issues lately:

1. I was travelling for almost two weeks, and have been utterly swamped
trying to recover since I returned.

2. The SENDMAIL multiple-mailing problem has continued to bug us, hitting
a DIFFERENT one of the six main SUBLISTS on the average of EVERY OTHER
ISSUE.  Thus, to try to minimize your agony and my annoyance, I have
favored fewer but larger issues.  (Yes, that could be a part of the
problem, too.  However, in general, it appears that the trouble has
usually been due to the combination of noncompliant remote hosts and
nameserver problems.  I've also had to drop a few addresses that were
hanging in a nameserver loop!)

3. At long last, we are about to install a new version of SENDMAIL that is
claimed to improve some things while still handling awkward addresses
properly and doing some of the other things we need, requirements that
keep us from backing off to an earlier warhorse version.  Perhaps some of
the old problems will now go away, although I shudder at the thought that
we may now have to discover some new problems.  At any rate, please bear
with us.

4. We still have a lot of .ARPA addresses, even though the ARPANET is
essentially kaput.  I trust that those of you affected will provide
updated addresses before it is too late.  Let's not wait until the address
grace period expires.

Thanks for your patience.  PGN

------------------------------

Date: Mon, 16 Apr 90 14:39:04 bst
From: Stephen Page
Subject: [London] Tube train leaves ... without its driver

[An article by Dick Murray in the [London] Evening Standard under the
above title, 12 April 1990.]

The Runaway train went down the track -- leaving an embarrassed Victoria
Line Tube driver standing behind on the platform.  He had broken a golden
rule and left the cab of his fully-automated train to check a door which
had failed to close properly.  When the door did shut, an electrical
circuit was completed and the train, with 20 passengers on board, moved
off before the driver had time to rush back to the controls.  A London
Underground spokesman, stressing that there was no danger to any of the
passengers, said today: `He did not turn off the automatic-mode switch
before leaving the cab.'
The driver, whom Underground chiefs refused to name, then had to wait on
the platform for six minutes before he could get on the Tube [train]
behind to catch up with his own train.  Passengers did not know what had
happened, except that there was nobody to open the doors at Pimlico until
the inspector let them out.  Now a full-scale inquiry into the incident,
at 11.20pm on Tuesday, has been launched by London Underground, and the
driver could face a disciplinary charge.

[Readers who have visited London may sympathise with my own theory, which
is that the train was so astonished to be less than 400% full that it
bolted. - S]

   [Yes, the normal situation is "tuBe or not tuBe, that is congestion."
   But he was certainly optimistic to think that he could CATCH UP with
   his train, since the automatic controls clearly are designed to
   prevent that.  A better strategy would have been to call ahead.  But I
   presume that the supervisor commandeered a substitute driver anyway.
   Actually, the train is not COMPLETELY automatic; the opening and
   closing of doors is controlled manually.  But perhaps there is an
   interlock that keeps the train from taking off again after a time-out
   without the doors having been opened?  Otherwise it might just have
   kept on going.  Seems like a good subject for the Kingston Trio.
   Headless horseman?  Horseless headman?  Watch the driver putter?  PGN]

------------------------------

Date: Fri, 13 Apr 90 22:16:19 EDT
From: henry@zoo.toronto.edu
Subject: Shuttle roll incident on January '90 mission

As many people know, during the LDEF retrieval mission in January 1990,
at one point when the crew were asleep, a garbled "state vector" uploaded
to Columbia caused the orbiter to start rotating.  The extent of this has
been somewhat exaggerated -- maximum speed was half an RPM, and the crew
didn't notice until Mission Control woke them up -- but it was a bit
startling that such a thing could happen.  It could have been serious if
it had happened at a worse time.

Most software people, on hearing about it, react with "haven't those
clods ever heard of checksums?".  Well, it turns out they have.  In the
latest issue of World Spaceflight News (an excellent source for serious
technical detail on shuttle flights), the full story is given.  The
telemetry channels were noisy at the time, the state vector was garbled
by several noise "hits", and Mission Control's computers correctly
announced that the copy sent back by the orbiter for confirmation didn't
match the original and that the state vector should be discarded.  The
ground controller responsible for the matter examined the detailed
report, incorrectly decided that nothing important had been damaged --
and ordered the orbiter's computers to begin using the defective state
vector!  The orbiter, naturally, obeyed.

In other words, this was ultimately operator error.  The controller's
action was "clearly outside the expected" procedures.  (The question of
whether this sort of thing is routine practice was not addressed, but I
for one would suspect that the controller wouldn't have done that if he
hadn't had to use manual overrides before.)  Procedures have been changed
as a stopgap, and various long-term fixes are being considered, including
the possibility of "inhibiting" the manual override in such cases.  (It
is not clear whether this means making it impossible, or just requiring
some degree of confirmation or authorization.)

Henry Spencer at U of Toronto Zoology     uunet!attcan!utzoo!henry
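   [A minimal sketch of the readback-verification scheme described above,
   assuming a hypothetical frame format and a CRC-32 comparison; it is an
   illustration only, not the actual ground or flight software.  The
   risky path is the one in which an operator override commits a vector
   that has already failed verification.]

      # Sketch: an uplinked state vector is echoed back by the vehicle
      # and compared against the original before it is committed.
      import zlib

      def checksum(frame: bytes) -> int:
          """CRC-32 over the raw uplink frame (hypothetical format)."""
          return zlib.crc32(frame)

      def verify_uplink(original: bytes, readback: bytes) -> bool:
          """True only if the echoed copy matches the copy that was sent."""
          return checksum(original) == checksum(readback)

      def commit_state_vector(original: bytes, readback: bytes,
                              operator_override: bool = False) -> bool:
          """Decide whether to start using the uplinked state vector."""
          if verify_uplink(original, readback):
              return True    # verified: use the new state vector
          if operator_override:
              return True    # manual override of a failed check: risky
          return False       # discard the garbled vector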
------------------------------

Date: Mon, 16 Apr 90 09:53:30 PDT
From: "Trevor Warwick  dtn: 830-4432  16-Apr-1990 1730"
Subject: Software failures on Boeing 747-400?

Last week, I caught the end of an item on the BBC TV news where they were
talking about a software fault that had been found on British Airways'
new Boeing 747-400s.  Apparently, on more than one occasion, the flight
control system had throttled all four engines back to idling speed while
the aircraft was climbing.  On each occasion, the pilot immediately
intervened and successfully rectified the situation.  It was then stated
that after investigation by BA and Boeing engineers, a fault was
corrected in the control software.  Boeing was quoted as stating that
they could not absolutely guarantee that faults like this would not occur
again.  Can anybody supply any further details?

For the record, I'm not overly worried about the increasing
computerisation of aircraft systems.  With today's software technology, I
don't see how you can avoid incidents like the one mentioned above, and I
wouldn't be stupid enough to claim that you could build a completely
reliable system.  However, on balance, I think that the automated systems
are an improvement, since they go some way towards eliminating the main
cause of accidents, which is human error on the part of the pilot.  It
may be that human error on the part of the software engineer causes a
different class of fault than you would expect a pilot to make, but just
because a fault is bizarre doesn't necessarily make it more dangerous.
The only reason I wouldn't fly across the Atlantic in an A320 is that it
doesn't have enough engines!

Trevor Warwick, Telecommunications and Networks Engineering,
Digital Equipment Corporation, Reading, England.
warwick%marvin.enet@decwrl.dec.com
["the opinions expressed herein do not necessarily reflect the views or
opinions of Digital Equipment Corporation"]

------------------------------

Date: Tue, 17 Apr 90 21:45:17 EDT
From: karn@thumper.bellcore.com (Phil R. Karn)
Subject: False 1099 forms

Monday's Wall Street Journal carried a front-page article about a new
tactic being used by some hard-core tax protesters against the IRS: the
filing of false 1099 forms.  (IRS form 1099 is used to report
miscellaneous payments to individuals; anyone who has done independent
consulting should be familiar with them.)  The targets of the false 1099s
were generally enemies of the tax protesters: employees of the IRS,
Federal judges, etc.  The intent was apparently to cause grief for the
victim when the IRS inquires why he or she didn't report the "income" on
his or her tax return.  The reported "amounts" ranged from thousands to
tens of millions of dollars.

Although in principle this type of attack does not require the
involvement of a computer, the IRS has apparently in recent years begun
to conduct large-scale computerized cross-checks between 1099 reports and
the tax returns of the payees to see if miscellaneous income was properly
reported.  A sufficient number of false 1099 forms might seriously
diminish the cost-effectiveness of this technique because of the need for
manual verification of the forms.  (The article quoted a former IRS
commissioner as saying that this type of attack had the potential of
becoming a "stick of dynamite" in the tax system.)
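   [As a rough illustration of the kind of automated cross-check
   described above: the sketch below totals 1099 amounts per payee and
   flags returns that report less.  The record layouts, field names, and
   tolerance are hypothetical, not the IRS's actual matching program; a
   single false 1099 for an absurd amount is enough to flag a victim.]

      def flag_unreported_income(forms_1099, returns, tolerance=1.0):
          """forms_1099: iterable of (payee_id, amount) pairs.
          returns: dict mapping payee_id -> misc. income reported.
          Yields (payee_id, 1099 total, reported) for each shortfall."""
          totals = {}
          for payee_id, amount in forms_1099:
              totals[payee_id] = totals.get(payee_id, 0.0) + amount
          flagged = []
          for payee_id, total in totals.items():
              reported = returns.get(payee_id, 0.0)
              if total - reported > tolerance:
                  flagged.append((payee_id, total, reported))
          return flagged

      # One forged form filed against an otherwise honest return:
      print(flag_unreported_income([("123-45-6789", 10000000.0)],
                                   {"123-45-6789": 0.0}))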
One wonders if there may be a fundamental limit on the degree to which
institutions that must be open to many sources of public input can rely
on a high degree of computerization to achieve cost-effectiveness,
especially when the institution or its policies are unpopular.  Local
police and fire departments have long faced the problem of false alarms
and malicious reports.  They seem to deal with the problem by screening
calls with human operators and by resigning themselves to wasting a
certain percentage of their resources responding to crank calls.  Credit
reporting systems must also be open to many sources of input, and they
too are often deliberately fed garbage.

Closer to home, computer networks such as the Internet, phone BBSes and
even "Sneakernet" represent social institutions through which people can
contribute software, data and so forth, with the goal of increasing the
cost-effectiveness of computer systems in general.  But the Internet worm
and especially the PC virus problem may mean that there is a fundamental
limit on the degree of sharing, because of the effort that each
individual must spend to ensure that he doesn't pick up an infected
program or leave his computer open to network attack.  Although privacy
and authentication mechanisms do exist, so far they seem to be workable
only when the "community" within which the mechanisms are used to grant
trust is relatively small, and the threat of sanctions is effective in
deterring abuses of that trust.

In the case of 1099 forms, anyone buying the services of an individual US
resident is apparently required to file them, and that takes in an awful
lot of corporations, small businesses and even individuals.  Some may be
in other countries.  Even if each filer of 1099 forms were given an
authenticator of some kind by the IRS, there would be little to keep
someone from applying for such an authenticator under a false name.  (In
the cases cited in the WSJ article, a number of criminal prosecutions are
underway, apparently because many of the perpetrators used their own real
names as the payers!)  In the computer field, I'll leave it to your
imagination how easy it is to find the author of a PC virus.  Those few
suspects that have been caught seem to have been extraordinarily careless
in leaving evidence and witnesses around.  I venture to guess that had
Robert Morris developed his worm on his own computer in complete secrecy,
we'd still be wondering who did it.

Phil

------------------------------

Date: 16 Apr 90 14:53:00 EDT
From: "zmudzinski, thomas"
Subject: Re: Risks [9.080] of Daylight Savings Time

One counter-question: when did the railroads stop running on Standard
Time?  (The transcontinental railways were the main reason we went to
Standard in the first place.  The last time I rode a train (not recent,
I'll admit), they still ignored local vagaries like Daylight Saving, War,
Victory, etc. Time.)

/z/

------------------------------

Date: Wed, 18 Apr 90 09:31:13 EDT
From: Chuck Weinstock
Subject: Re: Risks [9.080] of Daylight Savings Time

The railroads went (mostly) to daylight time in the 1960's as best I can
tell.  I have some employee timetables that indicate that they go into
effect at 12:01am (or 2:01am) Pacific Standard Time, on the Sunday of the
time change, yet they seem to list their times in daylight saving time.
I also looked at an old Official Guide from the 60's, and it had some
railroads showing their schedules in DST and others in ST.
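   [A small illustration of the ambiguity just described: the same
   printed wall-clock time denotes two instants an hour apart depending
   on whether it is read as standard or daylight time.  The date and
   fixed offsets below are arbitrary examples, not taken from any
   timetable.]

      from datetime import datetime, timezone, timedelta

      PST = timezone(timedelta(hours=-8), "PST")
      PDT = timezone(timedelta(hours=-7), "PDT")

      wall_clock = datetime(1990, 4, 29, 0, 1)       # "12:01am", no zone given
      as_standard = wall_clock.replace(tzinfo=PST)
      as_daylight = wall_clock.replace(tzinfo=PDT)

      print(as_standard.astimezone(timezone.utc))    # 1990-04-29 08:01:00+00:00
      print(as_daylight.astimezone(timezone.utc))    # 1990-04-29 07:01:00+00:00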
Amtrak, at least, publishes its schedules according to local time (when
the computer doesn't goof!).

Side note: most railroads today don't publish times in their employee
timetables (and they haven't had public timetables since Amtrak).  The
reason for this is that, with the increasing use of radios, it is
typically more efficient to run all trains as "extras" and let the
dispatcher sort out which train meets which train and where; the capacity
of the railroad is increased.  The exception is an Amtrak train, which
always runs to a schedule.

Chuck

------------------------------

Date: [lost, but old]
From: Richard Morton via PGN
Subject: Comment on UK Software Standards [RISKS-9.1 and 2]

I have been following the discussion on the RISKS Forum regarding the new
UK defense software standards, and it seems to me that the arguments
about what technology should be in the standards are irrelevant.  The
standards as described by Sean Matthews (RISKS-9.1) appear to address
only technical concerns (what programmers should do), and as such will
never lead to higher quality systems regardless of which technology they
espouse.  Quality is first and foremost a management problem, not a
technical problem.  Long before we worry about the technology, we need to
learn some basic management principles and apply them.  David Parnas
recognizes this when he writes:

   I believe that organisations such as MoD would be better advised to
   introduce regulations requiring the use of certain good programming
   techniques, requiring the use of highly qualified people, requiring
   systematic, formal, and detailed documentation, requiring thorough
   inspection, requiring thorough testing, etc. than to introduce
   regulations forbidding the use of perfectly reasonable techniques.

Fred Brooks tried to make the importance of management clear in The
Mythical Man-Month.  For example, he noted "More software projects have
gone awry for lack of calendar time than for all other causes combined."
There is not even a hint of a technical problem there.  He explains:

   I believe that large programming projects suffer management problems
   different in kind from small ones, due to division of labor.  I
   believe the critical need to be the preservation of the conceptual
   integrity of the product itself.

That book is almost 15 years old, and very few people seem to have taken
it to heart.  Marketing concerns still dominate schedule estimates on
competitive contracts.  Project managers still add more people to late
projects.  It seems like no one is listening.  Apparently, we still
haven't gotten the attention of top management.

Why not try something different, something so simple any CEO can
understand?  Henry Ford once recommended to Congress that all that was
needed to solve the water pollution problem was to require people who
take water out of a river to return it upstream from where they take it
out.  I believe that a similar simple but elegant solution to the
"software problem" exists.  Only one quality standard is required: any
company bidding on a high-risk project (or better still, any project)
must be able to demonstrate (and continue to demonstrate during the life
of the contract) that the quality of its software development process is
better this year than it was last year.

It does not do any good to try to tell people how to solve the problem
until it is their problem.  Some people call this the 2-by-4 theory of
how to get someone's attention.
After they have been hit right between the eyes, they will ask us for the
technology we have been trying to give them for the last 20 years.

Richard Morton, Institute for Defense Analyses
(Opinions expressed are strictly my own.)

------------------------------

Date: Sun, 15 Apr 90 01:49:19 EDT
From: unkydave@shumv1.ncsu.edu (David Bank)
Subject: Automated Fast Food

Recent articles by davy@instd.sri.com and webber@psych.toronto.edu
reminded me of a story related to me by my girlfriend.  For a time, she
worked at a Hardee's restaurant in North Carolina.  It is my
understanding that this was one of their "flagship" operations, as it was
geographically close to the center of the Hardee's "empire" in Rocky
Mount, NC.  The events here took place a number of years ago.

This store had registers installed that recorded all of the purchases, so
that at the end of the day the manager could get printouts of all the
data, make sure the till matched the reported sales, and help keep the
inventory updated.  These registers were "linked" and could "talk" to
each other.  At the end of business each day, the manager would go to a
register (it did not matter which one), punch in a numeric code, and ALL
of the registers would start printing out ALL of the purchases they had
recorded that day.  Once this process was triggered, from ANY register,
there was NO way to abort or stop it.  Short of pulling the plug on the
whole system (which would obviously cause data loss), the registers
printed until they were finished.

One day, a recently discharged assistant manager went through the
drive-thru at the beginning of the lunch "rush hour."  When he got to the
window, he made a request of the person at the window that he knew would
require them to leave their station.  While they were gone, he leaned in
the window, typed the aforementioned code into the register, and drove
off ... leaving the registers at the store paralyzed as they printed out
the morning's sales at the height of the lunch rush.  The front-line
staff was reduced to taking orders on pads and hand-calculating the cost
and applicable tax, constantly referring to the menus behind them.

Unky Dave

------------------------------

End of RISKS-FORUM Digest 9.81
************************