Subject: RISKS DIGEST 18.36

RISKS-LIST: Risks-Forum Digest  Weds 21 August 1996  Volume 18 : Issue 36

FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

***** See last item for further information, disclaimers, caveats, etc. *****

Contents:
  Internet Explorer Security Problem (Ed Felten)
  Computer Testing of Nuclear Weapons (Frank C. Ferguson)
  Swiss address risks of holding referenda by Internet (PGN)
  Risks of remote-controlled fireplaces (Jeffrey Mattox)
  Re: Escaping software upgrade hell (Vladimir Z. Nuri)
  Re: London Train Crash (Roger Hird, Clive D.W. Feather, Martin Poole)
  "Authentication Systems for Secure Networks" by Oppliger (Rob Slade)
  Abridged info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Wed, 21 Aug 1996 13:12:59 -0400
From: felten@CS.Princeton.EDU (Ed Felten)
Subject: Internet Explorer Security Problem

We have discovered a security flaw in the current version (3.0) of Microsoft's Internet Explorer browser running under Windows 95. An attacker could exploit the flaw to run any DOS command on the machine of an Explorer user who visits the attacker's page. For example, the attacker could read, modify, or delete the victim's files, or insert a virus or a backdoor into the victim's machine. We have verified our discovery by creating a Web page that deletes a file on the machine of any Explorer user who visits the page.

The core of the attack is a technique for delivering a document to the victim's browser while bypassing the security checks that would normally be applied to it. If the document is, for example, a Microsoft Word template, it could contain a macro that executes any DOS command.

Normally, before Explorer downloads a dangerous file such as a Word document, it displays a dialog box warning that the file might contain a virus or other dangerous content, and asks the user whether to abort the download or proceed anyway. This gives the user a chance to avoid the risk of a malicious document. However, our technique allows an attacker to deliver a document without triggering that dialog box.

Microsoft has been notified and is working on a fix. Until a remedy is widely available, we will not disclose further details about the flaw.

For more information, contact Ed Felten at felten@cs.princeton.edu or 609-258-5906.

Dirk Balfanz and Ed Felten
Dept. of Computer Science, Princeton University
http://www.cs.princeton.edu/sip/

------------------------------

Date: Tue, 20 Aug 1996 16:37:09 -0400 (EDT)
From: "Frank C. Ferguson"
Subject: Computer Testing of Nuclear Weapons

An AP dispatch out of Washington reported that IBM had won a government contract to build the world's most powerful computer -- an ultrasupercomputer 300 times faster than any machine now in use. Energy Department officials, who announced the $94 million contract, said the computer will be used to simulate nuclear explosions so that the government can test atomic bombs without actually blowing them up. Energy Secretary Hazel O'Leary said the new computer will be a "dramatic leapfrog" over currently available technology.

I guess eliminating an actual explosion, or "rapid disassembly" as the Air Force labels it, would reduce a lot of risks, wouldn't it? Hmmmm. I suspect we would soon be building weapons that would never work when the real trigger was actually squeezed.
I also suppose the next logical step for the government would be to use the new computer to test nuclear devices without actually building them. Just think how much safer the world would be. It could also be used to compare the military forces of the world and determine which country wins a war without actually fighting one. Wow, just think of all the lives that would save. Secretary O'Leary sure has my vote. Uh-huh!

------------------------------

Date: Wed, 21 Aug 96 9:43:24 PDT
From: "Peter G. Neumann"
Subject: Swiss address risks of holding referenda by Internet

There have been recent proposals in Switzerland to hold their frequent national referendums via the Internet. The Swiss Federal Council has rejected a parliamentary request to pursue that approach, on the grounds that the computers and networks involved are too vulnerable to sabotage -- from almost any computer in the world. [Source: A Reuters item, 20 Aug 1996, noted by Dave Farber]

------------------------------

From: "Vladimir Z. Nuri"
Subject: Re: Escaping software upgrade hell

A summary of some additional points on this topic brought up by correspondents, identified by their initials, follows.

I mentioned several very useful capabilities that are not yet widespread and do not yet go under common terminology. They are ranked below in order of abstraction, complexity, cost, and degree of penetration (with increasing abstraction come greater cost and complexity, and fewer implementations). Note that complexity and cost apply both to the design of such a system and to its use.

1. Ease and seamlessness of switching between versions of code -- either forward during an upgrade, or backward when incompatibilities or bugs arise.

2. "Shadowing" new code, so that it is run and checked against the old code before it goes live. (A minimal sketch of this idea appears below, after the correspondents' comments.)

3. Computer control over code versions based on automated reliability measurements -- software that itself decides when new versions are activated, according to whether reliability tests pass or fail.

DCB mentions that old Multics and ARPA systems both "did real-time updates and rolled both forward and back between releases of components", with a lecture once given on the subject by Paul Stachour. This appears to be an instance of (1) and (3).

JWB complains of the cost and complexity of the above, and suggests that implementing any of these features may actually make software more buggy in the long run, and that returning to a previously stable state may be harder than it sounds.

SK says that (1) already appears in disk-controller hardware, where it is called "hot swapping" of code.

HT says that "everything you describe [1-3] is regular practice in telephony hardware and software", with the caveat that "you have to build systems from the hardware up for this kind of [reliability]". He describes a system by Nortel: "essentially, yes, the system runs in shadow mode all the time, often shadowing a release with a duplicate of itself, but sometimes shadowing on a different release." "Duplicated systems are complex and hard to do right - they're also the bread-and-butter of telephony companies everywhere. Unfortunately, we don't blow our own horn as much as we should."

MS says that NASA is investigating modular hardware and software "hot swapping" on an ongoing basis. Chorus Systems (France) is working with the European Space Agency, and all components are "hot swappable".

AF says, "You have some great ideas here. I agree that a fundamental shift in the way we think has merit. I have personally designed a software system that is designed with field upgrade in mind from the ground up."
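To make the "shadowing" idea in point (2) concrete before continuing with the correspondents' comments, here is a minimal illustrative sketch (in Python, and not modelled on any of the systems mentioned here): a candidate version of a routine runs in the shadow of the trusted version, the old result is always the one actually used, and disagreements or crashes are merely logged for later review. The routine names are hypothetical.

  import logging

  logging.basicConfig(level=logging.INFO)
  log = logging.getLogger("shadow")

  def shadow(old_fn, new_fn):
      """Serve old_fn's result, but also run new_fn and log any
      disagreement or crash, so the new code can be observed in live
      use without yet being trusted."""
      def wrapper(*args, **kwargs):
          old_result = old_fn(*args, **kwargs)      # the answer actually used
          try:
              new_result = new_fn(*args, **kwargs)  # candidate runs in the shadow
              if new_result != old_result:
                  log.warning("mismatch for %r: old=%r new=%r",
                              args, old_result, new_result)
          except Exception:
              log.exception("candidate version failed on %r", args)
          return old_result
      return wrapper

  # Hypothetical example: two versions of a fee-rounding routine.
  def round_fee_v1(amount):
      return int(amount * 100 + 0.5) / 100.0

  def round_fee_v2(amount):
      return round(amount, 2)

  round_fee = shadow(round_fee_v1, round_fee_v2)
  print(round_fee(1.005))   # caller always sees v1's answer

The same wrapper could be inverted to prefer the new version once the mismatch log has stayed quiet for long enough, which is essentially point (3).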
AF mentions, however, that in a "closed-loop system" it is far more difficult to compare two versions of software directly, because their results are supposed to diverge. I pointed out in a reply that software to measure whether this divergence is moving in the "correct direction" (via shadowing) would also tend to minimize upgrade glitches, in the cases where the upgrade actually degrades performance.

SR writes that the real problem we are trying to solve is "incorrect software", and describes ways of improving software reliability before installing it. I am obviously in favor of doing that to the greatest extent possible, but feel that there is a point of diminishing returns at which exponential labor is spent trying to remove the so-called "last bug". It is this view I was generally rejecting in the essay, by starting from the idea that even well-intentioned designers who have done all the reasonable testing possible cannot catch every bug. In fact, I think the terms "correctness" and "bug-free" are highly misleading: a black-and-white view of a gray situation. Is there a "correct" instance of a complex piece of software anywhere in existence? "Professionally, I feel that we need to ensure that software going out is as correct as possible" sounds as compelling to me as a politician stating, "I'm in favor of families."

The major point of my essay was that even the best engineers make mistakes that cannot be detected except in live use ("bugs happen"?), and that, paradoxically, when we finally learn to accept that -- instead of constantly pointing the finger at the engineer who made the mistake -- we might begin to create systems that work in spite of such mistakes. I am increasingly skeptical of the claim, applied to every bug, that "the designers didn't test thoroughly enough"; it is often too trivial an answer for too complex a situation. In other words, sometimes the best solution is building a better mousetrap, and other times it is figuring out how to make the existence of mice less catastrophic.

The case I was focusing on in my essay is the situation where software *previously* worked, and then *fails* to work in the same way because of an upgrade. In that situation, *regardless* of designer fault, the end user would ideally have enough control to fall back gracefully to the prior, working functionality (a rough sketch of such a fallback mechanism appears at the end of this note). However, SR makes some very good points about methods of improving reliability prior to delivery, including treating debugging and test code as an intrinsic part of the delivery, something I strongly advocate. In fact, if clients increasingly began to ask, prior to signing a contract, "What are you going to include in the system itself to catch errant behavior?" instead of focusing on the resumes of the designers, we might all be better off in the long run.

WC mentions Ericsson's Erlang language, which embodies some of these ideas and was designed to support high-availability systems like telephony: http://www.ericsson.se/cslab/erlang/

In general, from the responses I would conclude that large Internet and cyberspace vendors could benefit significantly from studying the techniques telephony uses to ensure continued, stable operation even during upgrades. Internet protocol designers, faced with bulging-at-the-seams network traffic, might also look to telephony techniques for ideas. In fact, the designers of Internet telephone software may eventually have to confront the same issues that telephone designers did many decades ago.
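Before leaving the upgrade question, here is the promised sketch of points (1) and (3) -- graceful fallback under automated control. It assumes a hypothetical deployment layout in which each release of a component lives in its own directory and a "current" symbolic link points at whichever release is live; the health check is supplied by the caller. This is only an illustration of the pattern, not a description of any product mentioned above.

  from pathlib import Path

  # Hypothetical layout: /opt/app/releases/<version>/ per release,
  # with /opt/app/current as a symlink to the live one.
  RELEASES = Path("/opt/app/releases")
  CURRENT = Path("/opt/app/current")

  def activate(version: str) -> None:
      """Repoint the 'current' link at the given release.  The rename of
      the temporary link is atomic on POSIX filesystems, so readers never
      see a half-switched state."""
      tmp = CURRENT.with_suffix(".tmp")
      if tmp.exists() or tmp.is_symlink():
          tmp.unlink()
      tmp.symlink_to(RELEASES / version)
      tmp.replace(CURRENT)

  def upgrade(new_version: str, old_version: str, health_check) -> str:
      """Switch to new_version, but fall back automatically to the
      known-good old_version if health_check() reports trouble."""
      activate(new_version)
      if health_check():
          return new_version      # upgrade accepted
      activate(old_version)       # graceful fallback
      return old_version

  # Hypothetical use, with whatever probe makes sense for the system:
  #   live = upgrade("2.1", "2.0", health_check=lambda: service_responds())

Real systems would of course gate the decision on richer reliability measurements gathered over time, but the shape of the mechanism -- switch, measure, and keep the old version within easy reach -- is the point.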
Future conference organizers in the telephony and Internet areas might consider this promising cross-pollination. We appear to be in the early stages of the heralded "convergence", in which telephony and internetworking are learning from each other before merging into a unified cyberspace, with AOL and others realizing that they are becoming more and more like utility services. With everyone's hard work, hopefully in the end we will get the best of both worlds (the reliability of telephony, the flexibility of the Internet) instead of the worst of both (the flexibility of telephony, the reliability of the Internet)!

I am struck by the apparent lack of a single definitive reference on this critical subject. These techniques are spread across aerospace, military, telephony, and other niche and proprietary areas, and would benefit immensely from a systematic treatment by someone with a broad perspective. How about "Creating Fault-Tolerant Systems"? It would be a great yin to the yang of the recent RISKS book by PGN now on the shelves. Any takers?

   [Wouldn't it be great if someone wrote a journal that contained instances of software, hardware, etc., that correctly handled a very difficult problem or catastrophic situation due to the ingenuity of the code? Studying it might actually improve the world even more than RISKS.  VZN]

------------------------------

Date: Wed, 21 Aug 96 10:09:06
From: roger.hird@argonet.co.uk (Roger Hird)
Subject: Re: London Train Crash (RISKS-18.34)

I don't want to turn Risks Digest into a platform for political discussion of the privatisation of the British railway system, but there are background points worth making to put recent contributions in context.

The original item (RISKS-18.32) was based on a thin US newspaper report, and the contributor commented that Risks Digest had carried a number of examples of rail incidents in London. This comment needs context. Those incidents involved three separate systems: British Rail (BR), what we think of in the UK as "the railways"; London Underground, London's subway system; and the Docklands Light Railway (DLR), a new, small, overhead system serving redeveloped dock areas in East London, whose teething problems with systems and software have attracted perhaps excessive attention.

BR, the national railway system, has not had all that many accidents or incidents in the London area. Of the ones explicitly identified with London there was one major one, some years ago now: the Clapham rail disaster (sic -- it was actually nowhere near Clapham), due essentially to faulty wiring in a signal display. That one was entirely relevant to Risks Digest. But I don't remember many, if any, other system-related incidents on the BR network in or near London. There have been mechanical-failure incidents, e.g. brake failures, with one death.

Alastair Scott's contribution (RISKS-18.34) seems to be a fair summary of what is currently known about the train crash outside Watford. The comments on the apparently greater safety of the modern rolling-stock design are interesting. There seems little doubt that, given the "black boxes", the survival of the drivers, and the Health and Safety Executive enquiry (required by law in the case of any fatal rail crash, I believe), we should get reasonably reliable, and quick, conclusions about what went wrong. But I thought Jim Reid's contribution less helpful.
I'm a rail traveller and share his scepticism about the privatisation of Britain's railway system and the incredibly complex structure of franchises, leases, service companies, operating companies, and leasing companies that it has spawned. But that is no reason to suspend ordinary principles of objectivity. There was no indication from anyone, and no serious allegation, that any factor in the latest crash resulted from, or was exacerbated by, the privatisation of the British Rail system, or by doubts or uncertainties about responsibilities or boundaries between the various companies involved. There were lots of transient problems of that sort as the new structures were put into place (I remember, in 1994, days of frustrating buck-passing between station operators and train operators as I simply tried to find out who was responsible for property lost on one particular train -- I never did get back that CCTA digest on writing information-system strategies), but they were transient. There are still some issues of concern, but I hear few serious suggestions that safety operations are not being co-ordinated.

Jim Reid's points, except for one unreferenced anecdote about how very junior staff handled a small fire, are simply assertions and expressions of opinion about what is happening and how the companies involved are behaving. The role of the Health and Safety Executive, which enquires into accidents, is clearly defined in statute.

Non-UK readers may also get the impression from Jim Reid that the absence of ATP is a consequence of cost-paring decisions by private companies. Clearly, decisions not to install ATP after privatisation are theirs, but they follow a long history of low investment in the state-owned system. Given the system they bought, it is not unreasonable that the new owners should, in making safety-related investments, go for the best value. If the statistic quoted by Alastair Scott is valid, then new rolling stock might well be a better investment, both in safety terms and commercially, than ATP. Again, Jim Reid's comments on the investment strategies of the newly privatised system are as yet predictions, not descriptions.

Finally, Jim Reid's comments on the ATP/rolling-stock choice appear to question the idea that investment in risk avoidance can reasonably rest on cost-benefit comparisons. Surely that is what risk management is all about? We all do it individually, and qualitatively, in our decisions about safety in our homes and cars. Some people have formal risk-management responsibilities: transport operators of all sorts, highway designers, power operators, air-traffic-control organisations. They have limited investment budgets and are constantly having to balance safety and commercial considerations in making their choices. They do this quantitatively and formally and, in the public sector, usually on the record -- and have done for decades. Assigning a value to lives potentially saved is part of that process. Risk elimination is the ideal -- but seldom affordable.

Roger Hird   roger.hird@argonet.co.uk   http://www.argonet.co.uk/users/roger.hird

   [There were many other messages on this subject, e.g., from Pete Mellor, campbellp@logica.com (Peter Campbell Smith), Fieldhouse Dirk, Paul Smee, and Tim Sheen. Rather than try to pick out nuggets from each, I have selected the following note from Clive D.W. Feather. Perhaps those of you not included will add specific points if there are any remaining!  PGN]

------------------------------

Date: Wed, 21 Aug 1996 10:52:22 +0100
From: "Clive D.W. Feather"
Feather" Subject: Re: London train crash There were two long items in RISKS about this crash. Both, I am afraid, present a distortion of the facts. >What happened was that a train from Euston to Milton Keynes with about 400 >passengers on board was travelling North at about 60mph. Another empty >train, which should have been waiting at a red signal just outside Watford >Junction, started moving slowly South and crossed over a set of points onto >the North-bound fast line straight into the path of the fast train. Given the layout of track here, *if* the signalling was operating correctly, then this cannot be a correct explanation. The actual layout is roughly: >-----------------------*--------------------------------> NB fast \ <====================*----*---*--------------------------< SB fast \ A \ >----########>+S+++++++*=+=*++++*++++++++++++++++++++++++> NB slow Passenger \ <----------------------------*========T=<########--------< SB slow A Empty The intended route of the passenger train is shown as ++++, and that of the empty one as ====. The important point (sorry) is the pair of points labelled A. These always move together. If these points were set correctly for the passenger train and the empty train had passed signal T at Danger, then it would have run along the slow line and no accident would have occurred. But clearly AA (which is where the collision happened) was set for the empty train, and the passenger train passed signal S at Danger. *IF* the signals were working correctly, then the facts tell us that the route was set for the empty train, and the passenger train passed a Danger signal. It is still possible that the empty also passed a Danger signal, but we can't tell that from the evidence presented so far. >"Black boxes" were recovered from both crashed trains, and showed that >signalling and train systems were working properly just before the crash. In which case the passenger train driver is primarily at fault. >It was a huge stroke of luck that the collision involved new rolling stock: >on some other lines carriages are 30 or 40 years old and are of an antique >"slam-door" design which concertinas in the event of a crash. Concertinaing, as a problem, was largely eliminated in the 1950s and 1960s. While it is true that these were slam door trains, that is not relevant to the issues. > I have read >that, of about 70 deaths on the United Kingdom railways in the past 10 >years, all but one have occurred in these older carriages. Yes. But over half these numbers are due to a *single* event - the double collision near Clapham Junction. And there it was not concertinaing that caused the deaths, but the speed of the first collision followed by another train ploughing through the wreckage. And a good number of those left are due to people falling out of open doors. In other words, this statistic says nothing about the crashworthiness of these trains. [Regarding Clapham, please recall an item in RISKS-8.01 from Clive via Mark Brader, and another later one in RISKS-8.85 from Jon Jacky. PGN] >A number of risks have emerged from the recent crash. The first concerns >Automatic Train Protection, ATP. This system is claimed to stop any train >which passes a red signal. It is not used on the British railway network, >though it is deployed on other European railways. [Apologies to any >trainspotters for any simplification I've made.] The UK railway companies >say that ATP is too expensive - it costs too much for each life it saves, >though how they work that out is beyond me. 
>(Another risk?)

This is actually fairly simple to determine. The costs of complete ATP installation are known. A record is kept of all accidents and their causes, so it is simple to determine how many deaths and injuries would not have happened over (say) the last 20 years if ATP had been installed at each place. Divide.

>They claim that the money required for ATP would be better spent on other
>safety measures like modern, stronger passenger carriages. So, rather than
>prevent trains crashing into each other, they think the best strategy is to
>let them crash, but make the rolling stock safer. (Yet another risk?)

Not all accidents would be prevented by ATP -- apart from anything else, ATP failure is an extra possible *cause* of accidents. The calculation is more like this (the numbers are illustrative, not exact): ATP installation costs 400 million pounds and saves 10 lives; anticlimber installation costs 30 million pounds and saves 15 lives -- that is, 40 million pounds per life saved versus 2 million. Which is the better spend? Especially given that the government has reneged on its promise to fund ATP.

>The next risk is the absurd way in which Britain's railways are now run
>after privatisation.

Unfortunately all too true.

>For the leasing companies, repainting old trains is more cost
>effective than buying new ones which presumably have better safety features.
>For the operating companies, the cost of leasing is one of the few costs
>they can control. [They can also work their drivers harder, but that will be
>another safety risk.] Thus, they prefer to run old, less safe, trains
>because they are cheaper to lease than new ones.

Not necessarily the case. In particular, the leasing prices were set so that the cost of an old train is the same as that of a new one; it is claimed that this will encourage investment in new trains, as operators will prefer them.

Clive D.W. Feather, Associate Director, Demon Internet Limited

------------------------------

Date: Wed, 21 Aug 1996 12:27:33 +0100 (BST)
From: mpoole@uhea904.gb.ec.ps.net
Subject: Re: London train crash

There is one piece of information that does not seem to have made it into general circulation regarding the crash. The northbound train (the one with the passengers) was the 17:04, which left on time. The preceding northbound train (the 16:54) was delayed by over six minutes, which meant the southbound (empty) train would have been delayed by at least that long in crossing over onto the other line.

Martin Poole, Perot Systems Europe   mpoole@uhea904.gb.ec.ps.net

------------------------------

Date: Tue, 20 Aug 1996 10:24:30 EST
From: "Rob Slade"
Subject: "Authentication Systems for Secure Networks" by Oppliger

BKAUSFSN.RVW   960608

"Authentication Systems for Secure Networks", Rolf Oppliger, 1996, 0-89006-510-1

%A   Rolf Oppliger
%C   685 Canton St., Norwood, MA 02062
%D   1996
%G   0-89006-510-1
%I   Artech House/Horizon
%O   617-769-9750 800-225-9977 fax: +1-617-769-6334 artech@world.std.com
%P   186
%T   "Authentication Systems for Secure Networks"

Given the relative scarcity of knowledge about data and communications security, it seems rather odd to find a security book that comes right out, first thing, and says that it is not intended to be a tutorial. However, Oppliger does not spend much time on the basics. (There is a general introduction to security terminology and techniques, but only one chapter.) The emphasis of the book is on the explanation, review, and comparison of various systems for ensuring the security of communications within a network over which the security of physical links may be in doubt.
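As a point of reference for readers new to the area -- this is a generic modern illustration, not an example taken from Oppliger's book -- the following short sketch shows the core problem such systems address: proving to a peer, over a link that may be tapped or spoofed, that you hold a shared secret, without ever transmitting the secret itself. The names and the choice of HMAC-SHA256 are simply assumptions for the sketch.

  import hmac, hashlib, secrets

  # Hypothetical shared secret, distributed out of band.  Automating the
  # distribution of such keys is much of what the systems reviewed in
  # this book are about.
  SHARED_KEY = b"example shared secret"

  def make_challenge() -> bytes:
      """Verifier: issue a fresh random nonce so old answers can't be replayed."""
      return secrets.token_bytes(16)

  def respond(challenge: bytes, key: bytes = SHARED_KEY) -> bytes:
      """Claimant: prove knowledge of the key by MACing the challenge."""
      return hmac.new(key, challenge, hashlib.sha256).digest()

  def verify(challenge: bytes, response: bytes, key: bytes = SHARED_KEY) -> bool:
      """Verifier: recompute the MAC and compare in constant time."""
      expected = hmac.new(key, challenge, hashlib.sha256).digest()
      return hmac.compare_digest(expected, response)

  # One round trip (network transport not shown):
  nonce = make_challenge()
  print(verify(nonce, respond(nonce)))   # True only if both ends hold the key

Roughly speaking, what Kerberos and its relatives add on top of this basic exchange is a trusted third party that lets two principals who have never met establish such shared keys in the first place.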
The systems covered include Kerberos, NetSP (Network Security Program), SPX (Sphinx), TESS (The Exponential Security System), SESAME (Secure European System for Applications in a Multivendor Environment), and OSF DCE (the Open Software Foundation's Distributed Computing Environment). Kerberos gets the most space, probably because most of the rest are variously expansions or refinements of the basic Kerberos concepts. The examinations are detailed, although not to the level necessary for implementation, and the overview looks into individual strengths and weaknesses. A final chapter does a side-by-side comparison of the systems in terms of functions, cryptographic techniques, standardization, availability, and exportability.

copyright Robert M. Slade, 1996   BKAUSFSN.RVW   960608

roberts@decus.ca   rslade@vcn.bc.ca   rslade@vanisl.decus.ca
Author, "Robert Slade's Guide to Computer Viruses", 0-387-94663-2 (800-SPRINGER)

   [For those of you who have not read it, please note the RISKS policy on the redistribution of copyrighted materials submitted to RISKS: see risks.info or risksinfo.html, cited below.  PGN]

------------------------------

Date: 15 Aug 1996 (LAST-MODIFIED)
From: RISKS-request@csl.sri.com
Subject: Abridged info on RISKS (comp.risks)

The RISKS Forum is a MODERATED digest. Its Usenet equivalent is comp.risks.

=> SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or equivalent) if possible and convenient for you. Or use Bitnet LISTSERV. Alternatively, DIRECT REQUESTS (via majordomo) to <risks-request@csl.sri.com> with a one-line body,
  SUBSCRIBE (or UNSUBSCRIBE) [with net address if different from FROM:]
or
  INFO [for the unabridged version of the RISKS information]

=> The INFO file (submissions, default disclaimers, archive sites, .mil/.uk subscribers, copyright policy, PRIVACY digests, etc.) is also obtainable from
  http://www.CSL.sri.com/risksinfo.html
  ftp://www.CSL.sri.com/pub/risks.info
The full info file will appear now and then in future issues. *** All contributors are assumed to have read the full info file for guidelines. ***

=> SUBMISSIONS: to risks@CSL.sri.com with a meaningful SUBJECT: line.

=> ARCHIVES are available at
  ftp://ftp.sri.com/risks
or
  ftp ftp.sri.com; login anonymous; [YourNetAddress]; cd risks
or
  http://catless.ncl.ac.uk/Risks/VL.IS.html [i.e., VoLume, ISsue].
The risks directory at ftp.sri.com also contains the most recent PostScript copy of PGN's comprehensive historical summary of one-liners: get illustrative.PS

------------------------------

End of RISKS-FORUM Digest 18.36
************************