26-Sep-85 13:36:28-PDT,14732;000000000000
Mail-From: NEUMANN created at 26-Sep-85 13:33:02
Date: Thu 26 Sep 85 13:33:02-PDT
From: RISKS FORUM (Peter G. Neumann, Coordinator)
Subject: RISKS-1.16
Sender: NEUMANN@SRI-CSLA.ARPA
To: RISKS-LIST@SRI-CSLA.ARPA

RISKS-LIST: RISKS-FORUM Digest   Thursday, 26 Sep 1985   Volume 1 : Issue 16

FORUM ON RISKS TO THE PUBLIC IN COMPUTER SYSTEMS
Peter G. Neumann, moderator

Contents:
  Intellectual honesty and the SDI (Bill Anderson)
  RISKy Stuff (Mike Padlipsky)
  Mailer Protocol Woes (Rob Austein)
  Risks in Synchronizing Network Clocks (Ann Westine for Jon Postel)
  Re: Moral vs. Technological Progress (Joel Upchurch)
  Risk Contingency Planning -- Computers in Mexico (Mike McLaughlin)

Summary of Groundrules: The RISKS Forum is a moderated digest.  To be
distributed, submissions must be relevant to the topic, objective, in good
taste, and coherent.  Others will be rejected.  Please try to advance the
discourse, rather than iterating and reiterating.  There is no fine line
between what is acceptable and what is not, and thus the moderator's
decisions may occasionally seem arbitrary.  It is our clear intent to
represent a diversity of views.  However, we cannot be expected to provide
counterarguments when none are forthcoming.
(Contributions to RISKS@SRI-CSL.ARPA, Requests to RISKS-Request@SRI-CSL.ARPA)
(FTP Vol 1 : Issue n from SRI-CSL:RISKS-1.n)

----------------------------------------------------------------------

Date: 18 Sep 85 15:48 EDT
From: WAnderson.wbst@Xerox.ARPA
Subject: Intellectual honesty and the SDI
Courtesy-of: minow%rex.DEC@decwrl.ARPA (Martin Minow, DECtalk Engineering)

FROM AIList Digest   Friday, 20 Sep 1985   Volume 3 : Issue 125

At the recent IJCAI at UCLA I picked up a couple of papers at the GE exhibit
booth.  One of these, entitled "A Tutorial on Expert Systems for Battlefield
Applications" (delivered at a meeting of the Armed Forces Communications and
Electronics Association last May), states that "AI systems that incorporate
human expertise may be the only way" to fill the gap between availability of
people and complexity of military hardware.  In defense of this strategy the
author states:

  - In contrast with humans, AI systems are good at handling the myriad
    details of complex situations, such as often occur in military settings.

  - In contrast with other computational approaches that are more formal and
    algorithmic, AI systems are more robust: they are designed to deal with
    problems exhibiting uncertainty, ambiguity, and inaccuracy.

I find it appalling (and frightening) that statements like this can be
presented in a technical paper to military personnel.  The author (according
to the references) has contributed widely to the AI field at many
conferences.  It's simply ludicrous to state that current AI systems are
better in battlefield situations than humans.  What was the last AI system
that could drive a tank, carry on a conversation, and fix a broken radio
whilst under enemy fire?

The second comment is equally misleading.  To contrast "formal and
algorithmic" with "robust" seems to imply that algorithms and formal
procedures are inherently not robust.  On what is this claim based?  (There
is no reference attached to either statement.)  It sounds like a recipe for
unreliable software to me.

How can someone write this stuff?  I know, to make money.
But if this is the kind of information that is presented to the military, and
upon which they make decisions, then how can we expect any kind of fair
assessment of the possible projects in the Strategic Computing (and Defense)
Initiatives?  How can this kind of misinformation be rebutted?

Bill Anderson

P.S.  The full reference is available on request.

------------------------------

Date: 20 Sep 1985 18:33:30 EDT
From: PADLIPSKY@USC-ISI.ARPA
Subject: RISKy Stuff
To: neumann@SRI-CSL.ARPA

... I suppose I might as well succumb to temptation and offer a couple of
comments on stuffgoneby:

The most striking omission, to my mind, in the SDI discussion is (unless I
missed spotting it) the failure to draw the parallel to SAGE.  For those who
don't remember/know, the Semi-Automated Ground Environment was the
grandfather of the big, "man-machine" system, certainly in DoD and most
probably in the field as a whole.  It was intended to allow for the
interception of manned bombers.  It is widely acknowledged to have spun off
a lot of what became the state of our art (collective art, that is -- i.e.,
what I call the computer racket).

Like SAGE, SDI is probably dealing with the wrong threat (since things don't
have to go through the air to go boom ... and since things don't even have
to go boom to rack up megadeaths).  Also like SAGE, SDI might have useful
spinoffs (a 20-years-younger colleague claims to be for it because it should
help get him off this planet).  Unfortunately, unlike SAGE it seems to
possess a real potential for stampeding the presumed Bad Guys into doing
something ... unfortunate.

What a good thing we Men of Science know better than to reason from analogy,
eh?

... muted cheers, map

------------------------------

Date: Fri, 20 Sep 1985 14:36 EDT
From: Rob Austein
To: mooremj@EGLIN-VAX
Cc: neumann@sri-csla
Subject: Mailer Protocol Woes

I was actually a culprit in a similar mailer lossage earlier this week.  The
whole thing started out when I was dialed up to XX on a noisy connection.
Improbable as it seems (although not quite on the order of monkeys and
Shakespeare), the random line noise managed to generate the 7-character
command sequence necessary to send off my entire mail file as a single
message to a major mailing list.  All 110 pages (= 281600 bytes) worth of it.

Fortunately for the network (but unfortunately for my reputation), the list
happened to be the TOPS-20 maintainers' mailing list, so the message got
killed off in pretty short order.  I have since put a couple of safeguards
into my mail reader environment so that this particular lossage can't happen
again, but since the real culprit was transmission line noise I have been
kind of nervous about reading my mail over dialups ever since....

--Rob
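   [Purely as an illustration of the sort of safeguard involved -- a
   hypothetical sketch, not the actual TOPS-20 mechanism Rob describes -- a
   mail reader might refuse to send anything unusually large, or anything
   addressed to a known mailing list, without a typed confirmation that
   random line noise is very unlikely to produce.  The size threshold, the
   placeholder list address, and the function names below are illustrative
   assumptions only:

       # Hypothetical guard wrapped around a mail reader's "send" action.
       SIZE_THRESHOLD = 20_000                        # bytes (assumed)
       MAILING_LISTS = {"some-big-list@example.org"}  # placeholder entry

       def confirmed_send(recipients, body, send):
           """Call send(recipients, body) only after any risky send is
           confirmed by typing a whole word, not a short keystroke
           sequence that line noise could stumble into."""
           risky = (len(body.encode()) > SIZE_THRESHOLD or
                    any(r.lower() in MAILING_LISTS for r in recipients))
           if risky:
               answer = input("Really send %d bytes to %s?  Type YES to send: "
                              % (len(body), ", ".join(recipients)))
               if answer.strip() != "YES":
                   print("Send aborted.")
                   return False
           send(recipients, body)
           return True

   The design point is simply that the confirming action must be something
   noise cannot plausibly generate by accident.]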
------------------------------

Date: 25 Sep 1985 09:07:47 PDT
Subject: Risks in Synchronizing Network Clocks (RFC956 Now Available)
From: Ann Westine
Plucked-From: Request-For-Comments-List: ;

A new Request for Comments is now available from the Network Information
Center in the directory at SRI-NIC.ARPA.

RFC 956:

  Title:      Algorithms for Synchronizing Network Clocks
  Author:     D. L. Mills
  Mailbox:    Mills@USC-ISID.ARPA
  Pages:      26
  Characters: 68868
  Pathname:   RFC956.TXT

This RFC discusses clock synchronization algorithms for the ARPA-Internet
community, and requests discussion and suggestions for improvements.

The recent interest within the Internet community in determining accurate
time from a set of mutually suspicious network clocks has been prompted by
several occasions in which errors were found in usually reliable, accurate
clock servers after thunderstorms which disrupted their power supply.  To
these sources of error should be added those due to malfunctioning hardware,
defective software, and operator mistakes, as well as random errors in the
mechanism used to set and synchronize clocks.  This report suggests a
stochastic model and algorithms for computing a good estimator from
time-offset samples measured between clocks connected via network links.
Included in this report are descriptions of certain experiments which give
an indication of the effectiveness of the algorithms.

Distribution of this memo is unlimited.

Public access files may be copied from the directory at SRI-NIC.ARPA via FTP
with username ANONYMOUS and password GUEST.

The normal method for distribution of RFCs is for interested parties to copy
the documents from the NIC online library using FTP.  Requests for special
distribution should be addressed to either the author of the RFC in question
or to NIC@SRI-NIC.ARPA.  Unless specifically noted otherwise on the RFC
itself, all RFCs are for unlimited distribution.

Submissions for Requests for Comments should be sent to POSTEL@USC-ISIB.ARPA.
Requests to be added to or deleted from this distribution list should be
sent to NIC@SRI-NIC.ARPA.

--jon.

   [I include this item in the RISKS Forum for a very obvious reason: one of
   the nastiest of all problems in distributed computer systems and networks
   is the synchronization problem.  That so many seemingly correct
   algorithms have in fact been flawed is a very important consideration
   here.  PGN]
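   [Purely as an illustration of the kind of computation the RFC describes
   -- a hedged sketch under simple assumptions, not the stochastic model or
   the algorithms RFC 956 actually specifies -- one time-offset sample can
   be derived from a single request/reply exchange, and a robust estimator
   such as the median can then combine many samples so that a few badly
   disturbed clocks do not corrupt the result.  The function names and the
   example numbers below are illustrative only:

       from statistics import median

       def offset_sample(t1, t2, t3, t4):
           """Estimated offset of a remote clock relative to the local one.
           t1: local time the request was sent
           t2: remote time the request arrived
           t3: remote time the reply was sent
           t4: local time the reply arrived
           Assumes roughly symmetric network delay in both directions."""
           return ((t2 - t1) + (t3 - t4)) / 2.0

       def robust_offset(samples):
           """Combine many offset samples.  The median tolerates a minority
           of wildly wrong values (e.g., a clock scrambled by a power
           glitch) far better than a simple mean would."""
           if not samples:
               raise ValueError("no offset samples")
           return median(samples)

       # Five samples, one from a badly disturbed server clock:
       print(robust_offset([0.012, 0.015, 0.011, 37.2, 0.013]))   # -> 0.013

   Which estimator to use, and under what delay and error model, is the
   sort of question the RFC's experiments address.]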
------------------------------

Date: Wednesday, 25 Sep 1985 16:27-EDT
To: risks@sri-csl.arpa
Subject: Re: Moral vs. Technological Progress
X-From: joel@peora.UUCP (Joel Upchurch)

>Actually McCarthy's original comment presupposes that moral and
>technological progress are comparable.  It is that assumption that I
>disagree with.  Ethics and the attendant morality provide the context
>within which all activity, and in particular technological progress,
>exists.  Morality and technology are not substitutes for one another
>and moral progress is not dependent on technology nor vice versa.
>There is always technological progress attendant to moral progress
>just because there is always technological progress.

I don't think that there is any such thing as an absolute moral standard.
Morality is simply a set of customs that evolves by trial and error to
improve the survival chances of the social group (family, tribe, nation,
whatever).  Notice that this has little to do with individual survival, and
that a moral principle, such as patriotism, may cause the individual to get
killed.

Now I think it is fairly easy to see that the capacity to put group survival
ahead of self-interest is an important genetic trait, and that tribes of
people that had this trait would be more likely to survive than tribes that
didn't.  That is not to say that this moral capacity doesn't vary greatly
from one person to the next, or even that it may not be more fully realized
in one person than another because of upbringing.  It is even possible that,
because of some genetic error, some people may be born without a moral
capacity, just as they might be born without arms or legs.

The point I'm trying to make is that although there will always be these
survival customs we call morality, the nature of the customs is heavily
dependent on the context they evolve in.  Thus the morals of a herding
society may be greatly different from those of an agrarian one.  Even two
societies in similar contexts may evolve different moral solutions to the
same problems.

This implies that if the context in which a society operates changes, then
the morals of that society will have to change too.  In recent centuries the
most important changes in the social context have been caused by technology.
Thus the morals appropriate to an agrarian society are not always
appropriate to an industrial one, or those of an industrial society to a
post-industrial one.  Moral progress means the evolution of survival customs
more appropriate to the current context.

The trouble in recent centuries has been that our ability to evolve new
technology has outstripped our capacity to evolve the appropriate morality
for it.  There is a strong tendency to stick to the morality that one learns
as a child, even if it is not appropriate to the current situation.  Our
current problem is that we have a technology that supplies us with ICBMs and
a morality that includes national patriotism.  Now it is obvious to any
thinking person that this is a serious dilemma.

Some people argue that we should adopt a new morality, more appropriate to
this technology, and indeed in the long term they are correct, although one
can argue about what the most effective moral solution to this problem is.
The trouble is that moral solutions evolve slowly.  Even today much of our
morality is left over from our nomadic and agrarian heritage and has limited
relevance in our modern society.  In order to apply the long-term solution,
we must first survive in the short term, and in that short term technical
band-aids, like the SDI, are appropriate.  Technology put us into this
situation, and I don't see why we can't ask technology to assist us in the
solution.  To think that we can solve the problem by an overnight revolution
in human nature is wishful thinking of the most dangerous sort.

Joel Upchurch
Perkin-Elmer Southern Development Center
2486 Sand Lake Road / Orlando, Florida 32809 / (305) 850-1031
{decvax!ucf-cs, ihnp4!pesnta, vax135!petsd}!peora!joel

------------------------------

Date: Sat, 21 Sep 85 13:25:25 edt
From: mikemcl@nrl-csr (Mike McLaughlin)
To: risks@sri-csl.ARPA
Subject: Risk Contingency Planning -- Computers in Mexico

This is not info, but a call for info.  I have no idea how badly the
computer bug has bitten Mexico, but the current news & TV coverage of the
disaster suggests that whatever computing/networking is going on must have
been affected.

If any RISKS reader knows some statistics or details about computer usage in
Mexico, and can give insights as to what happened, what backup and
alternative modes were in place, how well they worked, etc., it would
certainly help me, and perhaps many others, in planning for the disaster
that we hope never comes.  Perhaps we could discuss just what we want to
find out, and then go after the answers formally, if there are no informal
contacts on the net.

- Mike

   [This item may seem a little odd for the RISKS Forum, but when you
   realize that Mike's Navy job involves expecting the unexpected risks in
   computer and network usage, and planning what to do, it is indeed
   relevant.  By the way, in Mexico, as usual in times of disaster, the
   ham-radio buffs leapt in to help.
   Computer networks and internets also have a role to play.  I'm ready with
   my battery-operated portable terminal.  (This issue is being sent from
   DC.)  PGN]

------------------------------

End of RISKS-FORUM Digest
************************
-------