19-Nov-85 08:01:30-PST,16960;000000000001
Mail-From: NEUMANN created at 19-Nov-85 07:59:48
Date: Tue 19 Nov 85 07:59:48-PST
From: RISKS FORUM (Peter G. Neumann, Coordinator)
Subject: RISKS-1.23
Sender: NEUMANN@SRI-CSL.ARPA
To: RISKS-LIST@SRI-CSL.ARPA

RISKS-LIST: RISKS-FORUM Digest  Tuesday, 19 Nov 1985  Volume 1 : Issue 23

FORUM ON RISKS TO THE PUBLIC IN COMPUTER SYSTEMS
Peter G. Neumann, moderator

Contents:
  Expecting the unexpected (Peter G. Neumann)
  Safety Group Activities in the U.S. (Nancy Leveson)
  Automobile computer control systems susceptible to interference
    (Bennett Smith)
  Irresponsible computer "game"; BBS Legislation (Ted Shapin)
  SDI Debate at MIT (John L. Mills)

Summary of Groundrules: The RISKS Forum is a moderated digest.  To be
distributed, submissions should be relevant to the topic, technically
sound, objective, in good taste, and coherent.  Others will be rejected.
Diversity of viewpoints is welcome.  Please try to avoid repetition of
earlier discussions.

Sorry for the hiatus; circumstances were beyond my control.  Thanks to
Frieder von Henke for keeping it from being longer.  PGN

(Contributions to RISKS@SRI-CSL.ARPA, Requests to RISKS-Request@SRI-CSL.ARPA)
(FTP Vol 1 : Issue n from SRI-CSL:RISKS-1.n)

----------------------------------------------------------------------

Date: Mon 18 Nov 85 12:57:03-PST
From: Peter G. Neumann
Subject: Expecting the unexpected
To: RISKS@SRI-CSL.ARPA

We have been talking in this forum since 1 August about risks to the
public in the use of computer systems, and trying to keep the discussions
mostly technical -- not political or polemical.  On the other hand, I have
noted on various occasions that it is sometimes hard to exclude
nontechnical arguments.  This message may initially seem off the mark, but
I think it has an underlying relevance: it may help put into perspective
the importance of the assumptions we make and the need to anticipate
essentially all possible unusual circumstances.

On 13 October 1985, in Florence, Italy, at about 10:05 PM European
Standard Time, my apparently very healthy 19-year-old son Chris -- never
having had a medical problem in his life -- suddenly had his heart stop,
and he died within moments.  Resuscitation attempts failed.  The autopsy
found no discernible cause of death.  A neurophysiologist friend who
joined me in Florence suggested ventricular fibrillation as the most
likely actual cause.  The heart muscles receive signals from distributed
sources via independent paths, and normally all of the signals arrive
sufficiently synchronously to trigger heart contraction.  Under
fibrillation, the signals arrive incoherently and cannot be integrated
sufficiently to trigger contraction.

So, why do I mention this in a RISKS Forum?  Well, even in an apparently
completely healthy person there exists some risk of spontaneous
malfunction.  It seems to me that in arguing that hardware and software
will operate correctly -- or will continue to operate correctly if they
once did -- we make all sorts of implicit underlying assumptions that may
be true most of the time, but that may indeed break down, particularly in
times of stress.  The nastiest problems of all seem to involve
unanticipated conditions in timing and sequencing, in both synchronous and
asynchronous systems, and particularly in distributed systems.
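[To make the timing point concrete, here is a toy sketch (in Python,
purely illustrative; the source count, window, threshold, and jitter
figures are all invented).  It models a trigger that fires only when
enough pulses from independent sources land in one integration window; a
modest growth in per-source timing jitter silently takes the trigger from
"always fires" to "never fires".]

    import random

    # Toy model (invented numbers): a trigger fires only when enough of
    # the pulses from independent sources land in one integration window.
    N_SOURCES = 12     # independent signal sources
    WINDOW = 5.0       # integration window (ms)
    THRESHOLD = 10     # pulses needed inside the window to trigger
    TRIALS = 10000

    def trigger_rate(jitter_ms):
        """Fraction of trials in which the pulses integrate to a trigger."""
        fired = 0
        for _ in range(TRIALS):
            arrivals = [random.gauss(0.0, jitter_ms) for _ in range(N_SOURCES)]
            # count pulses landing inside the +/- WINDOW/2 window
            in_window = sum(1 for t in arrivals if abs(t) <= WINDOW / 2)
            fired += in_window >= THRESHOLD
        return fired / TRIALS

    for jitter in (0.5, 1.0, 2.0, 4.0, 8.0):
        print("jitter %4.1f ms -> trigger rate %.3f"
              % (jitter, trigger_rate(jitter)))

[For small jitter, the implicit assumption "the pulses always coincide"
holds in every trial; past a point it fails in essentially every trial,
and nothing in the logic itself warns that the margin is eroding.]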
We have seen various problems in the past -- the ARPANET collapse (due to
an accidentally propagated virus, after years of successful operation) and
the first shuttle launch immediately come to mind as specific,
well-documented examples.  In some cases there is also signal interference
-- as in the pacemaker problems (see Nancy Leveson's message in
RISKS-1.22).

I think that in our lives, as in computer systems, we tend to make
unjustified or oversimplified assumptions.  In life, this makes it
possible for us to go on living without inordinate worrying.  In computer
systems, however, greater concern is often warranted.  So, my point in
introducing this message into the RISKS Forum is to urge us to make our
assumptions both more explicit and more realistic.  Basing a system design
on assumptions that are almost but not quite always true may seem like a
close approximation, but it may hide enormous unanticipated risks.  In
systems such as those involved in Strategic Defense, for example, every
one of the potentially critical assumptions must be sufficiently correct.

Peter G. Neumann (back again on the net)

------------------------------

Date: 09 Nov 85 13:04:51 PST (Sat)
From: Nancy Leveson
To: risks@sri-csl.arpa
Subject: Safety Group Activities in the U.S.

Udo wrote about the EWICS TC7 in a previous RISKS Forum and said that such
a group died through lack of interest in the U.S.  Actually, a similar
group has been active in the U.S. for about three years.  It is called the
Software System Safety Working Group; it was started by the Air Force,
although it is now a tri-service group.  Although sponsored by the DoD, it
is not limited to military applications, and it has participants from
other branches of the government and from industry.  The latest meeting
was held in conjunction with a conference on computers in medicine.
Meetings are held approximately twice a year and usually draw 50 to 200
participants.  One of the products of the group is a Software Safety
Handbook, which is meant to accompany the recent MIL-STD-882B (System
Safety) update.  The main purpose of the group has been to meet to discuss
new techniques, share experiences, and exchange ideas.  A meeting is
tentatively planned for January in Washington D.C.  Anybody interested in
the group should contact me (nancy@uci.edu) or Al Friend (friend@nrl-css),
who is with the Navy and is currently chairing the group.  A future plan
is an on-line safety database, which would reside at the SRI NIC.

Other activities in which I have been asked to participate, and which
might be of interest to readers of this forum, are a conference on safety
to be held in Washington D.C. next July and a workshop on safety and
security sponsored by the Center for Software Reliability in England next
September.  I am also considering organizing a workshop on safety in
California, to be held just before the next International Conference on
Software Engineering in Spring 1987.  Anyone interested in more
information on any of these activities can again contact me, and I will
direct you to the right people.
Nancy Leveson
University of California, Irvine

------------------------------

Date: Wed, 23 Oct 85 11:14:29 -0100
From: ircam!bks@seismo.CSS.GOV (Bennett Smith)
Message-Id: <8510231014.AA02863@ircam.UUCP>
To: NEUMANN@SRI-CSL.ARPA
Subject: Automobile computer control systems susceptible to interference

By chance I saw an article in an issue of the "Journal of Environmental
Engineers" (published in England; date of issue about 10 months ago, I
believe) about the sensitivity of a microprocessor-based automobile
control system to external electromagnetic radiation.  As I recall, a CB
transmitter near the car could, at the right frequency, make the engine
slow down or speed up.  Perhaps this article would interest some of your
contributors.

Bennett Smith
IRCAM, 31 Rue Saint Merri, 75004 Paris, France
{seismo,philabs,decvax}!mcvax!ircam

------------------------------

Date: Mon 18 Nov 85 11:54:52-PST
From: Ted Shapin
Subject: Irresponsible computer "game"
To: risks@SRI-CSL.ARPA, human-nets@RED.RUTGERS.EDU
Phone: (714) 961-3393
Mail: Beckman Instruments, Inc.
Mail-addr: 2500 Harbor Blvd., X-11, Fullerton CA 92634

From a Toys R Us ad in the L.A. Times, 11/17/85:

  Activision HACKER.  Makes you feel like you've unlocked someone else's
  computer system!  For C-64, C-128.  $24.97.

[And on the package:]

  TEMPTATION.  HACKER.  To stumble into somebody else's computer system.
  To be someplace you're really not supposed to be.  And to get the
  strange feeling that it really does matter.  "LOGON PLEASE" is all you
  get to start with.  That's it.  From there, it's up to you.  If you're
  clever enough and smart enough, you could discover a world you've never
  before experienced on your computer.  Very tempting.

- - -

This "product" is socially irresponsible!  It leads young people to think
that breaking into unknown systems is OK.  The "world" they discover may
be the world of the penal system!

Ted Shapin

------------------------------

Date: Thu 14 Nov 85 10:34:50-PST
From: Ted Shapin
Subject: BBS Legislation
To: human-nets@RED.RUTGERS.EDU, risks@SRI-CSL.ARPA

I have remailed several messages on pending BBS legislation to
INFO-LAW@SRI-CSL.  One is a draft of a bill by Senator Leahy's aide
Podesta, which is a good bill.  People interested in preserving open
communication via BBSs may wish to read these items.

Ted Shapin

------------------------------

From: Moderator
Date: 29 Oct 85 09:33-EST
Subject: Arms-Discussion Digest V5 #8 [EXCERPT: SDI Debate]
To: ARMS-D%MIT-MC.ARPA@MIT-XX.ARPA
Reply-To: ARMS-D%MIT-MC.ARPA@MIT-XX.ARPA

EXCERPTED FROM Arms-Discussion Digest, Tuesday, October 29, 1985, 9:33AM,
Volume 5, Issue 8

[There is sufficient NONOVERLAP in our readerships to warrant reproducing
this.  Apologies to those who have read it already.  PGN]

Date: Mon, 28 Oct 85 10:58 EST
From: Mills@CISL-SERVICE-MULTICS.ARPA
Subject: SDI Debate

This is a summary of my impressions of the panel discussion/debate
entitled "Star Wars: Can the Computing Requirements be Met?", which took
place on Monday, October 21, at MIT.  The panelists were Danny Cohen,
David L. Parnas, Charles L. Seitz, and Joseph Weizenbaum.  The moderator
was Michael L. Dertouzos.

I was basically disappointed in this panel discussion.  I was hoping to
hear a good counter to the arguments Dr Parnas had put forth in his
papers.  Dr Cohen started what looked like an organized attack on Dr
Parnas' "Octet", referring to the series of eight papers in which Dr
Parnas presented his arguments.
Dr Cohen correctly dismissed the eighth paper, "Is SDIO An Efficient Way
To Fund Worthwhile Research", as being outside the bounds of the current
discussion.  Unfortunately, Dr Cohen discussed only one of the other
papers any further; the remaining six were dealt with by some minor
hand-waving.  I have to admit that I don't remember which paper Dr Cohen
"went into detail" on, because the detail amounted to a one-slide outline
of the six major points of that paper.  The slide was up for no more than
a minute, accompanied by some more hand-waving to the effect that none of
the points was true.

Back to the side claiming that the software is not feasible: Dr Weizenbaum
didn't really add much of anything to Dr Parnas' arguments.  He thought
that Dr Parnas had done a wonderful job and that there wasn't much he
could add.  He gracefully didn't take up much time saying this, either.

Dr Parnas basically presented the material in his papers.  He added the
new point that even if we built this thing and it "tested OK", we could
never really trust it and would be forced not to rely on it.

Charles Seitz made no attempt to attack Dr Parnas' argument directly.  He
focused his presentation on a simplistic hierarchical structure that the
software for SDI could take.  Unfortunately, this looked like a highly
centralized way of controlling all the weapons and sensors, resulting in a
high degree of complexity and size.

Both Dr Cohen and Dr Seitz hit upon the point that the software for SDI is
not necessarily as large and complex as some people might think, claiming
that it could be built of smaller, fairly independent parts.  To me this
appeared to contradict Dr Seitz's hierarchical control structure.  It did
come through that if you had enough totally independent platforms shooting
things down, you stood a good chance of hitting most of the targets.  It
is also clear that you would need a very high level of overkill to make
this work, since the platforms don't know who else is shooting at what.
[As a rough illustration of the overkill point: if N platforms each
independently pick one of T targets at random, the expected fraction of
targets engaged is 1 - (1 - 1/T)^N, roughly 1 - e^(-N/T), so N must be
several times T before nearly every target is covered.]

Back to Dr Parnas' points: I did get the feeling that there is some
general agreement that there is a limit to the scale and complexity our
software engineering can handle.  Dr Parnas furthered this point by saying
that, without large advances in the mathematics of discrete functions,
further progress in software engineering will hit a major stumbling block.
He does not expect such advances, on the grounds that if you simplify the
equations too much you lose information -- a discrete function can only
represent so many bits.  (I may not have this argument exactly right.)  He
also went through his standing arguments against AI or automatic
programming helping very much.

[I think the argument is that we need concise, manipulable discrete
functions modelling software in order to achieve what other fields of
engineering can do with concise, manipulable continuous functions.
However, such concise representations may not be possible due to
information-theoretic constraints on the number of bits that can be
represented by a certain number of symbols.  -- MDAY@MIT-XX]

[I didn't get quite this impression, though I agree with it.  Rather, I
thought Parnas was saying that the problem lies in the fact that software
is fundamentally digital: there is no such thing as a continuous function,
so the usual engineering assumption, valid in most of the physical world,
that small changes in input or in correctness mean small changes in output
or result simply isn't valid in the software engineering world.  Until it
is possible to analyze software in terms of approximately correct
functions, graceful software degradation (in the sense of an approximately
correct program always doing approximately correct things) is not really
possible.  -- LIN@MIT-MC]
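[To make the continuity point concrete, here is a minimal sketch --
invented for illustration, not anything presented at the debate; the
functions, constants, and threshold are all made up.  A continuous model
rewards testing at sample points; a program with a branch does not.]

    # Python sketch (illustrative only): continuous physical models
    # versus discrete program logic.

    def spring_force(x, k=40.0):
        # Continuous model, F = -k*x: nearby inputs give nearby outputs,
        # so tests at sample points bound the behavior in between.
        return -k * x

    def flight_mode(altitude_m):
        # Discrete logic: a single branch makes neighboring inputs
        # behave arbitrarily differently.
        return "CLIMB" if altitude_m < 1000.0 else "CRUISE"

    x = 999.9999
    print(spring_force(x), spring_force(x + 0.0002))  # differ by 0.008 N
    print(flight_mode(x), flight_mode(x + 0.0002))    # CLIMB vs. CRUISE

[Tests of flight_mode at 999 and 1001 reveal nothing about where the
branch falls or how sharply behavior changes there -- one reading of why
"tested OK" carries so much less weight for software than for a bridge.]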
Both sides came up with a number of interesting and colorful analogies.
The most relevant is the Space Shuttle.  Dr Cohen claims that the Space
Shuttle works.  This is obviously true in some sense.  However, it was
also pointed out that there have been times when the software on the
shuttle has not worked, within seconds of launch.  It seems that it would
be impractical to ask the Soviets to wait 15 minutes while we reboot the
system.

[Indeed, Seitz conceded that under certain circumstances plausible in the
context of a nuclear missile attack, it might be necessary to re-boot the
system.  He then proceeded to ignore the consequences of that; he did not
even say that there are ways to eliminate the need for re-booting.
-- LIN@MIT-MC]

In summary, it seems that there are very real limits on what our software
engineering can handle reliably, and that we are actually not that far
from those limits in some of our current efforts.  If SDI is to work, its
architecture must be dictated by what is doable in software.  It is
unclear that SDI is feasible from a material-cost point of view if the
platforms are small and independent enough to be reliable from the
software standpoint.

In closing, I would like to say that I don't think either side did a
particularly good job of sticking to just the software-feasibility issue.

One other interesting thing happened.  Dr Parnas claimed to have asked a
person with authority over SDIO whether "No, we can't do this" was an
acceptable answer.  He raised this for the first time at the debate
because he did not want to say it behind the person's back.
Unfortunately, I don't remember the person's name, but he was in the
audience.  Dr Parnas claims that the answer was "No is not an acceptable
answer", and he challenged the other person to deny this.  The other
person promptly stood up and did exactly that.

[If you mean that it was political, that's certainly true.  But politics
is really the determinant of the software specifications at the top level.
That is how it should be, and people who want to ignore that are ignoring
the most important part of the problem.  However, in other instances, the
Administration has noted that the SDI is central to future US defense
policy.  In addition, it has never specified what evidence it would
consider adequate or sufficient to turn off the SDI.  -- LIN@MIT-MC]

------------------------------

End of RISKS-FORUM Digest
************************