Subject: RISKS DIGEST 17.22
REPLY-TO: risks@csl.sri.com

RISKS-LIST: Risks-Forum Digest  Tuesday 1 August 1995  Volume 17 : Issue 22

***************************************************************************
****************** THIS IS THE TENTH ANNIVERSARY ISSUE. *******************
***************************************************************************

FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

***** See last item for further information, disclaimers, etc. *****

Contents:
10th anniversary of RISKS (Peter J. Denning)
"The Net" (Andrew Marc Greene)
Ten years still too soon to tell (Raymond Turney)
Which risks to fight first? (Raymond Turney)
Where do we go from here? -- A Sermon for the Converted (Karl W. Reinsch)
Limits to Software Reliability (Dick Mills)
Software Development (Dave Schneider)
R&D on the dependability of human-computer interfaces (Jack Goldberg and Roy Maxion)
Info on RISKS (comp.risks), contributions, subscriptions, FTP, etc.

----------------------------------------------------------------------

Date: Thu, 27 Jul 95 17:37:48 EDT
From: pjd@cne.gmu.edu (Peter J. Denning)
Subject: 10th anniversary of RISKS

From the perspective of one of the members of the ACM Council that approved the startup of the Risks Forum, I can say that RISKS has succeeded far beyond what we dared to hope. We hoped that it would be an opportunity for people to speak up on issues about risks of computers that concerned them. We hoped that this one forum would bring some organization to the chaotic discussions about risks and would allow for reasonable and constructive conclusions to emerge. At the same time we were somewhat fearful that the forum might be taken over by a small set of people who monopolized the discussions (as was common on BBs), or that the interest in it would die out in a couple of years.

But look at what has been accomplished:

1. RISKS is vibrant, alive, and well after 10 years.

2. RISKS is taken seriously by many people as a reasonable ongoing discussion of risks. It has made ACM a key player in this aspect of the Internet.

3. RISKS has wide participation.

4. RISKS has very wide distribution.

5. RISKS is a good observation post for what is on people's minds about computer-related risks.

6. RISKS has enabled the documentation of a large number of mishaps, malfunctions, and catastrophes, giving designers a solid base of information about what can go wrong with their systems. This documentation has been summarized and analyzed in Peter's ACM Press book (Computer-Related Risks, Addison-Wesley), and all back issues of RISKS are available to anyone at any time on the Web.

All this is in no small measure due to the gentle but firm hand of Peter Neumann, our moderator, who has brought a clear vision of what a constructive, serious on-line discussion about an important topic can be. And he has made it real.

Peter Denning

  [PJD is the "Peter Sellers" of the ACM: at one time or another, he's played all the major roles. He has also been a regular contributor to RISKS, dating back to the very first issue, 1 August 1985. PJD, many thanks for your kind and thoughtful words. PGN]

------------------------------

Date: Tue, 1 Aug 1995 10:07:02 -0400
From: "Andrew Marc Greene"
Subject: "The Net"

I saw "The Net" last night.
For those of you who don't know about this new movie, its basic plot is that a "computer analyst" named Angela Bennett runs into the software equivalent of the "Sneakers" chip -- it will break into anyplace. It's an OK movie, but worthy of RISKS' attention because it is a popularization of the sort of worries that we discuss all the time. And it does a good job of that, I think. It's realistic but not too alarmist. The resolution is a bit contrived, but aside from that the technology is believable (and the IP equivalent of a 555-xxxx phone number is xx.xxx.345.xxx -- an octet of 345 being impossible). But I kept wondering why, since this is set in California, Angela didn't try to call PGN for help.... :-)

- Andrew Greene

  [An article in this morning's SanFranChron indicates that the writers were inspired by a Japanese case of purloined identity (which has not yet appeared in RISKS). Evidently, their advice came from elsewhere, although RISKS readers will recall Terry Dean Rogan's misadventures as a primal case of being spoofed. Also, in the movie, Angela sought almost no technohelp from anyone, including law enforcement. But as I watched the film, I was ready, just in case she called. BTW, the folks who brought you WarGames and Sneakers are now working on a new film. Stay tuned. PGN]

------------------------------

Date: Fri, 23 Jun 1995 13:11 -0700 (PDT)
From: Raymond.Turney@ncal.kaiperm.org
Subject: Ten years still too soon to tell

The ten years that comp.risks has been in existence is not long enough to get a good grip on trends involving risks in computers and related systems. To see why, consider the history of nineteenth-century railroading, another technology whose use grew explosively and which evolved rapidly.

By 1870 or so -- about as long after railroading was first developed as 1995 is after digital computers started to be used -- it was very evident that there were risks in railroading. One of the most prominent of them was the combination of derailments, wooden passenger-car bodies, and oil lamps. Basically, in a derailment {or other wreck} the passengers would be pinned inside the car by blockage of the exits. The dislodged oil lamps would set the body of the car on fire, and the passengers trapped inside would be burned alive. In 1870 this was a serious problem that, with the steadily increasing number of rail passengers and passenger-miles, looked like it could only get worse.

By 1910 this problem had been basically solved. The solution was steel passenger-car bodies and electric lighting. An optimist can take comfort from the fact that this risk is now gone -- so far gone that only railfans with an interest in train wrecks {like my father, who bought a book on train wrecks, and I, who read it} are even likely to remember it. Further, the solution was in large part technical, and did not involve massive changes in people or social institutions. As a final point of encouragement, the widespread adoption of electric lighting aboard railroad passenger trains was not an evident possibility in 1870. So sometimes new technology does save you.

A pessimist could point out that forty years is a long time to wait for a solution to a problem as obvious as this one. He could also point out that my selected example of a railroading risk was carefully chosen to expose a problem which had been forgotten.
If I had focused not on the narrow and specific risk of passengers being burned alive in derailed cars, but instead on the more general problem of derailments, I would be discussing a problem that is still with us. Indeed, derailments could be offered as a problem that has been aggravated by the widespread adoption of a new technology -- namely, the much more widespread use of toxic chemicals carried in tank cars. Likewise, collisions between trains, a risk evident in 1840, are still happening and still being discussed in comp.risks as of June 1995.

Believers in upgrading and improving systems could use my example risk as support for their position. The adoption of steel passenger-car bodies depended on the development of more powerful locomotives, which in turn required heavier rail and a better quality of roadbed to operate. Thus the solution of an apparently narrow and specific problem turned out to hinge on a substantial upgrade of the system as a whole {which was actually done for other reasons}. Supporters of system upgrade have a proposal to mitigate, if not solve, the problem of derailments, too: track maintenance. A partial explanation for the rise in problem derailments {actually, all derailments are a problem, but not all derailments make CNN, which is what I am referring to} is the reduction of maintenance-of-way expenditures to the barest possible minimum by many railroad companies. The results of this will be predictable to most engineers. Cynics, of course, can point to this risk as evidence that people worry about the wrong things.

Another effect of railroading, not as obvious in 1870 as it is now, was to improve the effectiveness of mass mobilization in wartime. This turned out to be a precondition of World Wars I and II, far more destructive events than all the train wrecks of the nineteenth century combined. But while train wrecks made headlines in the nineteenth century, the impact of railroading on military mobilization did not.

The point of all this is that there is a historical parallel to the introduction of new risks with the mass adoption of computer technology, namely the introduction of new risks with the mass adoption of railroad technology. Many of the positions now being argued with regard to computer-related risk have analogues that could have been argued with regard to railroad-related risk. Since it is not clear even now which position would have been generally correct in 1870 with regard to railroad-related risk, I do not think we have the data to reach accurate conclusions about computer-related risk.

The railroad analogy does reinforce some recurrent themes in comp.risks, though. If you want to eliminate a risk, your best bet is to analyze the system as a whole, determine the preconditions of the risk, and change the system to remove them. After all, the proximate cause of most of the train wrecks in which passengers were burned alive was operator error; the elimination of the risk, though, involved going to steel passenger-car bodies, which is not obviously related to the proximate cause of any particular accident. Measures necessary to reduce or eliminate a risk are often expensive: replacing the entire US stock of wooden passenger cars with steel ones was not cheap, and the railroad companies were not happy about doing it. Social reforms that reduce strain on people can also increase their reliability, and thus the safety of a system.
The ten-hour day on the railroads {1916, if I remember right} was an important reform that made things safer for passengers. Perhaps a ten-hour day for programmers ... {just kidding, just kidding}.

Finally, extremes of hope and despair should be avoided. If the railroading analogy is any guide, we shall end up in neither perdition nor paradise. Particular risks will be eliminated, risk as a whole will not, and there will still be nice people unhappy about risk in the year 2100.

Raymond Turney

------------------------------

Date: Mon, 10 Jul 1995 12:47 -0700 (PDT)
From: Raymond.Turney@ncal.kaiperm.org
Subject: Which risks to fight first?

It has been argued elsewhere that it is too soon to tell which risks are the most significant and how best to deal with them. Unfortunately, decisions about which risks to fight, and how, will have to be made before the verdict of history is in. It would be better if these decisions were made after some consideration of the information available to us, rather than by throwing darts in the dark {though reports are that, as regards the stock market, darts are not as bad a means of decision making as one might at first suppose}.

My suggestion is that those who are concerned about computer-related risks should focus their attention on the risks to privacy resulting from massive data collection and analysis using computer systems. A secondary problem area, coming up fast, is the unknown psychological effects of both the Internet and the increasing availability of virtual reality.

There are a number of reasons for focusing on these risks, among the many risks that are out there. The first and strongest reason is that these risks arise from the intentional use of computers working more or less as designed; thus there is no natural counterforce working to contain these risks.

By contrast, one can consider a widely discussed {at least in RISKS} risk: fly-by-wire aircraft. Whatever the problems with the design of the A320, or possibly the 777, there are a large number of people working for the manufacturers, airlines, and regulators who are strongly against aircraft crashes. Aircraft crashes are thoroughly investigated and their causes widely reported in magazines devoted to flight, and the air-safety community is well developed. Computer-related risks as they affect aircraft will be dealt with by the air-safety community in the normal course of business. When one also considers that flight is one of the safest means of travel currently available, the conclusion is that the total additional risk to our lives and values resulting from the use of computers {given the continued existence of a strong aviation-safety community} is not major.

Similarly, the problems related to the use of computers in medical equipment are in effect a subset of the problems raised by iatrogenic disease in general. While the impression I get is that the medical-safety community is underdeveloped relative to the air-safety community {i.e., there are fewer and less powerful people interested in medical safety, and the medical equivalent of the FAA is much weaker}, it is very doubtful that medical safety would be significantly increased by focusing more of the existing medical-safety effort on the computer-related aspects of the overall risk of iatrogenic disease. Thus, it is not clear that a demand by computer people that medical-safety effort be reallocated to deal with computer problems would be a welcome thing.
After all, in arguing that computer-related risks make only a minor contribution to the overall risk, I am merely assuming that the small number of reports in RISKS of computer-related death or injury is a realistic sample of the problem.

It is possible to go through a number of the other risks discussed in comp.risks and make similar arguments. While a knowledge of computer-related risks in these areas is valuable, and should be made available to the relevant safety communities, these risks are already being dealt with. It may be that we need to strengthen the medical-safety community, but we probably do not want the computer-safety community to replace it. Look at the whole system, not merely the computer-related portion of the risk.

By contrast, the risks to privacy posed by modern computer technology are nobody else's responsibility. And since they arise from the use of the machines as designed, they will not be reduced by the normal efforts of engineers to make things work better. They will be increased by the normal efforts of engineers to make things work better.

The reason for the interest in the effects of the Internet and VR technology is simply that they are unknown. People are reporting problems with "Internet addiction", and as an old gamer I can see where there might be a problem with people preferring VR flight simulators to their real lives. Not being a member of the media, I will not suggest panic. Some psychological studies, to see whether this risk is real and how big it is, do seem appropriate, though.

In short, my recommendation is that efforts to increase the use of computers in ways which will invade and reduce personal privacy be resisted with all of the power available to the computer-science community. The extent of the computer-related contribution to medical risks should be studied. Psychological studies of the effects of the Internet and VR should be supported if the methodology is reasonably sound and the intent of the authors is not sensationalistic. What has been learned about computer-related risk should be communicated to members of the appropriate safety communities.

Raymond Turney

P.S. I suspect a lot of readers will question my assumption that sound methodology exists in psychological studies. From a scholarly perspective, I might even agree. But this is about policy, not scholarship, and while proof would be nice, clues will do.

------------------------------

Date: Sun, 23 Jul 1995 02:26:14 -0400 (EDT)
From: "Karl W. Reinsch"
Subject: Where do we go from here? -- A Sermon for the Converted

What is the problem that causes computer-related risks today? Is it the technology? The people? Those of us who read RISKS on a regular basis understand that it is people expecting too much from technology.

Why do people expect too much from technology? Sometimes it is lack of understanding. A large part of the problem is that the computer profession has oversold its product. Computers, people are told, can do anything. And if they can't do it now, they will tomorrow. Computers are perfect and foolproof. Wrong.

Things are better today. In our profession, awareness of the limitations of computers is at an all-time high. There exist forums such as the Risks Forum, and Peter Neumann's own columns in CACM. A look into bookstores reveals many recent books, such as the second edition of Theodore Roszak's "The Cult Of Information", Lauren Wiener's "Digital Woes", Clifford Stoll's "Silicon Snake Oil", and Peter Neumann's "Computer-Related Risks" -- and many that are more recent.
At the university I attended, the reading for my course work included Frederick Brooks' "The Mythical Man-Month", the ACM TWA case study, Donald Norman's "The Design of Everyday Things", and the ACM Code of Ethics. Awareness of the computer's limitations and risks is certainly being passed on to the next generation of computer professionals.

So, why do the risks still exist? It goes back to the computer having been oversold in the past. People have been told that the computer can do things that it cannot or should not do. The government of the United States once attempted to develop the Strategic Defense Initiative (SDI, or "Star Wars"); it took a group of computer professionals to come forward and point out how impossible such a project was. SDI is just one of many things that people assume computers can do.

The problem is not only in government. The problem is everywhere. Corporate leaders and futurists such as Alvin Toffler tell the public what the computer can, cannot, and will do. United States Congressman Newt Gingrich often cites Alvin Toffler's visions of technology and the future. It is my thinking that Congressman Gingrich should balance his reading diet with the works of Neumann and Roszak.

So what do we, as computer professionals, do now? Our primary problem is that we have been preaching to the converted. The computer professional is the primary reader of RISKS and related subject matter. We need to share the message of RISKS. We need to preach to the unconverted. As Justin Wells suggested in RISKS-17.19, we need to push for a risks segment on our evening news. We need television programs and newspaper columns. We need to get the message out to everyone. The RISKS tidal wave is only beginning to come in. It needs to wash over everything.

Happy Birthday, RISKS! Here's to 10 years and hopefully many more.

Karl Reinsch, kreinsch@radix.net

------------------------------

Date: Thu, 20 Jul 1995 15:01:32 -0400
From: rj.mills@pti-us.com (Dick Mills)
Subject: Limits to Software Reliability

For decades I've heard that software can't ever be as safe or reliable as hardware. That makes me feel uneasy, because it rings untrue. Instinct tells me it has nothing to do with software per se, but merely with how we structure it. The following is a little thought exercise; RISKS readers can amuse themselves picking it apart.

Complex electric circuits are composed of networks of connected components. Each component (like a resistor or a transistor) is very simple. Suppose we wrote a simulation of one of these simple components, say a resistor. The simulator would have a CPU, memory, and A/D and D/A converters for each of the device's ports. It would also have a battery, if we're going to avoid external power connections. Now package the simulator in epoxy with just two external ports, just as if it were a real resistor. Within a limited domain, the simulated resistor ought to be able to pass a Turing test, in that one could not distinguish it from a real resistor. Given bins of identical simulators for the needed basic components, we would have the raw materials to build circuit boards of arbitrary complexity.

The question is, how do the hardware and simulator-based versions of the same board inherently differ in complexity and reliability? Granted, a simulator of a resistor is more complex than an actual resistor. That gives hardware a slight edge from the start.
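  [To make the thought exercise concrete, here is a minimal sketch, in C, of the program such a simulated resistor might run. The converter interface (adc_read_volts, dac_write_amps), the port numbering, and the 1-kilohm value are hypothetical stand-ins, not anything from Mills' note; the converters are stubbed out so the sketch compiles and runs on its own, where real firmware would read and drive actual A/D and D/A hardware in an endless loop. The point is simply that the component's entire behavior reduces to one line of Ohm's law.]

    /* Sketch of a simulated two-port resistor, per the thought exercise
       above.  The converter calls are hypothetical placeholders. */
    #include <stdio.h>

    #define R_OHMS 1000.0   /* the resistance being simulated (assumed) */

    /* Stub converters, so the sketch is self-contained and runnable. */
    static double adc_read_volts(int port)          /* sample a port voltage */
    {
        return (port == 0) ? 5.0 : 0.0;             /* pretend 5 V across it */
    }

    static void dac_write_amps(int port, double amps)  /* drive a current */
    {
        printf("port %d: %+.6f A\n", port, amps);
    }

    int main(void)
    {
        /* One pass of the simulator loop; the epoxy-potted version would
           repeat this forever on battery power. */
        double v = adc_read_volts(0) - adc_read_volts(1); /* volts across device */
        double i = v / R_OHMS;                            /* Ohm's law: I = V/R */
        dac_write_amps(0, -i);                            /* current in at port 0 */
        dac_write_amps(1,  i);                            /* and out at port 1 */
        return 0;
    }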
However [and this is the real point], the simulator-hardware complexity gap would *not* grow in proportion to the number of components on the circuit board. All instances of the resistor simulator are alike, and the complexities of the circuit interconnections are identical for both implementations.

My conclusion is that there is no *a priori* reason why software-based systems cannot have the same risks as hardware-based implementations of the identical external requirements and identical design (i.e., circuit-level design). Is that correct?

Dick Mills  +1(518)395-5154  http://www.albany.net/~dmills

------------------------------

Date: Fri, 21 Jul 95 14:36:00 PDT
From: Dave Schneider
Subject: Software Development

While thinking about trends shown in the history of the Risks Digest, I also came across an article in _Software Development_, Vol. 3, No. 7, July 1995 (a Miller-Freeman publication). I think it has some very good comments in it.

The article is "Project-Level Design Archetypes", by David Bond. The sub-head is, "Choosing the 'best practices' to use for your software development project depends on the type of project you're dealing with." He goes on to say, "If you are tuned into the debate over software development practices, you are aware of how polarized this debate can be. Each side trumpets its view of what development is all about and largely ignores what the others have to say. Each point of view is based on real experiences. Each side mistakenly assumes all software development projects are similar enough that one set of strategies works for all."

He goes on to discuss Constrained Software (and SEI Maturity Levels), Internal Client Software, Vertical Market Software, and Mass Market Software. In each section, he lists the critical success factors, in approximate order of importance. For instance, under Mass Market Software, the list is "Marketing, Timeliness, Features, Cost, Ease of Use". For Constrained Software, as in embedded software or government contracts, it is "Adherence to the contract, including schedule and cost; Quality; Maintainability and extendibility". He further describes the impact of these different lists on choosing a "best practice" strategy (which is why SEI Maturity Levels are under Constrained Software).

Since many of the "failure to deliver" anecdotes in RISKS relate to information systems, the key items in this article are:

1) "If you look at the tradition of systems engineering, requirements changes are considered undesirable."

2) "On the other hand, most information systems departments have learned ... that they must accommodate change."

3) "One tremendous advantage of systems engineering is that it appears to scale up better. Various studies have indicated a high failure rate of large information systems projects. This is probably because large projects often involve issues beyond just software. Systems engineering is aimed at managing precisely these kinds of projects."

I strongly recommend the entire article.

/dps

Dave Schneider, Emulex Corp.

P.S. I have been reading a lot of back issues over the past several months, because new issues just don't appear fast enough for my appetite. So I have noticed some trends, as with the IS systems mentioned. Another trend is that there is often a lot of discussion of anticipated impacts of regulatory changes before they occur, and rarely any followup after they occur -- or on whether they occurred at all. But keep up the good work. I often find pithy things in the digests to share with my coworkers.
/dps

------------------------------

Date: Mon, 31 Jul 95 10:50:54 -0700
From: Jack Goldberg
Subject: R&D on the dependability of human-computer interfaces

Readers of RISKS need no awareness-raising about dependability problems in human-computer interaction. There are too many reports of transportation, power, or weapons systems in which operator misunderstanding or abuse of the control interface had tragic consequences. Serious, if not tragic, economic losses or inefficiencies can be traced to ambiguities or opacities in the information supplied to users or in the rules governing system operation. Some reports have attributed failures to the human interface that are really due to system designs that place impossible demands on operators. The maintenance interface is also a source of failures, from small to gigantic. Use of computers by groups introduces problems of human communication into the human-computer interface.

Design of the human-computer interface for dependability has been attended to seriously in numerous applications; for example, several RISKS reports have described very diligent attention to human-interface issues by aircraft designers, and there are other serious industrial efforts. Most of these are concerned with highly critical applications, and it is not clear that results from one application domain can be used in others. It would seem appropriate to have some general results about dependability and the human interface that can be applied across different domains and over different degrees of criticality.

What are the fundamental issues? How should we characterize problems at the human interface? How can we measure and observe interface risks and errors? How should interfaces be designed for low risk and for tolerating interface faults? How can systems be designed to prevent unreasonable requirements at the interface? Answers to these questions are not evident in current professional communications.

How should this study start? What are the right questions? What good results exist that can be generalized? What would a good testbed contain? What are the right forums for communicating results? Is anyone working on these issues? Despite many years of research in user interfaces, their dependability doesn't seem to be improving. (Yes, we are aware of all the research done in the CHI community.) This leaves us with the somewhat rhetorical question: how would someone demonstrate that an interface could be depended upon for a certain mission-critical application (such as a display in a hospital operating room)?

Jack Goldberg
Roy Maxion

  [Please reply directly to Jack, cc: Roy. Jack will cull the responses for RISKS. One of the key cases that might motivate this effort is the Vincennes-Aegis shootdown of the Iranian Airbus. PGN]

------------------------------

Date: 24 March 1995 (LAST-MODIFIED)
From: RISKS-request@csl.sri.com
Subject: Info on RISKS (comp.risks), contributions, subscriptions, FTP, etc.

The RISKS Forum is a moderated digest. Its USENET equivalent is comp.risks. Undigestifiers are available throughout the Internet, but not from RISKS.

SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or equivalent) on your system, if possible and convenient for you. BITNET folks may use a LISTSERV (e.g., LISTSERV@UGA): SUBSCRIBE RISKS or UNSUBSCRIBE RISKS. U.S. users on .mil or .gov domains should contact Dennis Rears. UK subscribers please contact Lindsay.Marshall@newcastle.ac.uk. Local redistribution services are provided at many other sites as well.
Check FIRST with your local system or netnews wizards. If that does not work, THEN please send requests to risks-request@csl.sri.com (which is not yet automated), with SUBJECT: SUBSCRIBE or UNSUBSCRIBE, and a text line of the form (UN)SUBscribe RISKS [address to which RISKS is sent].

CONTRIBUTIONS: to risks@csl.sri.com, with an appropriate, substantive Subject: line; otherwise they may be ignored. Contributions must be relevant, sound, in good taste, objective, cogent, coherent, concise, and nonrepetitious. Diversity is welcome, but not personal attacks. PLEASE DO NOT INCLUDE ENTIRE PREVIOUS MESSAGES in responses to them. Contributions will not be ACKed; the load is too great. **PLEASE** include your name and legitimate Internet FROM: address, especially from .UUCP and .BITNET folks. Anonymized mail is not accepted. ALL CONTRIBUTIONS ARE CONSIDERED AS PERSONAL COMMENTS; USUAL DISCLAIMERS APPLY. Relevant contributions may appear in the RISKS section of regular issues of ACM SIGSOFT's SOFTWARE ENGINEERING NOTES, unless you state otherwise. All other reuses of RISKS material should respect stated copyright notices, and should cite the sources explicitly; as a courtesy, publications using RISKS material should obtain permission from the contributors.

RISKS can also be read on the Web at URL http://catless.ncl.ac.uk/Risks and individual issues can be accessed using a URL of the form http://catless.ncl.ac.uk/Risks/VL.IS.html (please report any format errors to Lindsay.Marshall@newcastle.ac.uk).

RISKS ARCHIVES: "ftp unix.sri.com<CR>login anonymous<CR>[YourNetAddress]<CR>cd risks<CR>" or "cwd risks", depending on your particular FTP. Issue J of volume 17 is in that directory: "get risks-17.J<CR>". For issues of earlier volumes, "get I/risks-I.J<CR>" (where I=1 to 16, J always TWO digits) for Vol I Issue J. Vol I summaries are in J=00, in both the main directory and the I subdirectory; "bye<CR>". I and J are dummy variables here. REMEMBER, Unix is case sensitive; file names are lower-case only. <CR>=CarriageReturn; UNIX.SRI.COM = [128.18.30.66]; FTPs may differ; Unix prompts for username and password. Also try FTP-by-mail via bitftp@pucc.Princeton.EDU.

WAIS repository exists at server.wais.com [192.216.46.98], with DB=RISK (e-mail info@wais.com for info), or visit the WAIS URL http://www.wais.com/ . Management Analytics Searcher Services (1st item) under http://all.net:8080/ also contains RISKS search services, courtesy of Fred Cohen. Use wisely.

------------------------------

End of RISKS-FORUM Digest 17.22
************************