RISKS-LIST: RISKS-FORUM Digest  Monday 30 May 1988  Volume 6 : Issue 93

        FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS
   ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents:
  Westpac disaster revisited? (Dave Horsfall)
  Telecommunications redundancy (Chris Maltby)
  Plastic cash makes for a 'safe' society (Dave Horsfall)
  Re: Daedalus and Cash on Nail (Rudolph R. Zung)
  A Thumbnail Sketch of Daedalus: David E. Jones (John Saponara)
  More on programmed trading (Charles H. Buchholtz)
  Re: Computers as a weapon? (Amos Shapir)
  Re: risks of automatic test acknowledgement (Carl Gutekunst via Mark Brader)
  The Israeli Virus Bet Revisited (Y. Radai) [long]

The RISKS Forum is moderated. Contributions should be relevant, sound, in good
taste, objective, coherent, concise, nonrepetitious. Diversity is welcome.
Contributions to RISKS@CSL.SRI.COM, Requests to RISKS-Request@CSL.SRI.COM.
For Vol i issue j, ftp kl.sri.com, get stripe:risks-i.j ... .
Volume summaries in (i, max j) = (1,46),(2,57),(3,92),(4,97),(5,85).

----------------------------------------------------------------------

Date: Thu, 26 May 88 13:46:46 est
From: Dave Horsfall
Subject: Westpac disaster revisited?

Readers with long memories will recall the disaster of Westpac Banking Corporation (Australia) going "live" with an essentially untested system. The May 23rd issue of Computing Australia has a brief report on their new system, the Accounts Opening and Customer Information (AOCI) system, about to undergo three weeks of tests in Brisbane and Sydney. The AOCI system is part of a $120 million program to build a national, integrated new-generation computer system [buzz-word generated?] called Core System 90 (CS90) for Westpac.

Strangely enough, the article does not mention the aforesaid disaster, but one would hope this system will receive a lot more testing than the last one did... Stay tuned for further details.

Dave Horsfall (VK2KFU), Alcatel-STC Australia, dave@stcns3.stc.oz
dave%stcns3.stc.OZ.AU@uunet.UU.NET, ...munnari!stcns3.stc.OZ.AU!dave

------------------------------

Date: 29 May 88 19:58:23 [GMT+1000] (Sun)
From: munnari!softway.oz.au!chris@uunet.UU.NET (Chris Maltby)
Subject: Telecommunications redundancy
Organization: Softway Pty Ltd, Sydney, Australia

What no-one is talking about in the Chicago exchange fire is whether society (i.e., the government) has a role in ensuring adequate redundancy in as important a strategic network as the telephone system. The commercial decision made by the telephone company to cut staff in exchange for the risk is just that: a commercial decision. The decision to route all the trunks through the same building is also a typical commercial decision. The result of a concentration on getting the price down to compete in the market is a trade-off in services - and that means a much greater exposure to this sort of risk.

There is obviously a higher purpose being ignored in all this commercialism, but just whose function is it to impose decent standards on the poor old commercial phone companies? Should it be yet another government regulatory bureaucracy, or should the phone companies be made liable for damages caused by their failure to provide an essential service? Can we put up with higher phone bills? Just who is benefiting from all the phone company cost-cutting, anyway?

This is all rather academic to me, as Australia has a centralised government-owned phone company (which only last week was converted from a statutory authority into a limited company).
I think the issues are just as important for Australia as for the US, as we head away from the amorphous Posts and Telegraphs Department system, where the cost of a fully redundant network was just absorbed, towards a strict focus on the bottom line.

Chris Maltby - Softway Pty Ltd +61-2-698-2322

------------------------------

Date: Thu, 26 May 88 13:36:25 est
From: Dave Horsfall
Subject: Plastic cash makes for a 'safe' society

From "Computing Australia", 23rd May, 1988: "Plastic cash makes for a 'safe' society"

A cashless society was a safer society, an expert on electronic funds transfer told last week's ANZAAS conference in Sydney. Marie Keir, from the secretariat of the Australian Science and Technology Council in Canberra, said EFT meant shop assistants were less vulnerable as they had little money in the cash register. "The amount of cash to be handled and transported safely has also been a high expense to retailers and banks," Keir said. "EFTs have therefore been promoted as a form of payment to reduce handling costs. The effect of these changes has been a movement to a society which is largely cashless; one which deals with electronic information instead of notes, cash and cheques."

All very laudable, but I'm concerned that this is one more excuse to plunge headlong into a system with no standards, where the customer's rights are ill-defined, and where naturally nothing can possibly go wrong, go wrong, go wrong, go.....

Dave Horsfall (VK2KFU), Alcatel-STC Australia, dave@stcns3.stc.oz
dave%stcns3.stc.OZ.AU@uunet.UU.NET, ...munnari!stcns3.stc.OZ.AU!dave

------------------------------

Date: Thu, 26 May 88 13:28:43 -0400 (EDT)
From: "Rudolph R. Zung"
Subject: Re: Daedalus and Cash on Nail

Is this for real? I would hate to have my transactions limited by the size of my nails. In case you haven't noticed, some people have really short nails, and others have really long nails. People involved in manual labor may also have their nails scratched. Oh my.

Anyway, a cashless transaction system is already in place in Singapore. Interestingly enough, the government-subsidised Post Office Savings Bank (POSB, somehow related to the Post Office, though I have no idea why) issues its own brand of ATM cards. Under the direction of the government, it introduced cashless transactions. Most of the major stores in Singapore have a terminal for this service. The total charges for purchases and/or services are rung up and punched into this machine. The cashier then hands you a keypad (which has high sides so as to prevent people from peeking at what you're typing in), and the keypad's display shows you how much has been rung up. It also asks you "Account?", to which you tell it whether you want to debit your savings, checking or whatever account. Having done that, it asks for "PIN?" (Personal Identification Number). This is the same number you would use to get money out of an ATM. I suppose this remote terminal then calls up the bank and verifies everything (just as an ATM probably would). It may sometimes respond "Transaction Denied" (insufficient balance, cannot contact the bank's computer, who knows) or "Transaction Approved", in which case the money is debited from your account immediately (again just like an ATM, except that no money physically comes out). You then get your little receipt and everybody is happy. Notice that the cashier does not get to find out what account you paid from, nor your PIN. All the cashier knows is that you're using a bank card to pay, and how much you paid for the purchase.
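[The sequence just described can be sketched in a few lines of Python. Everything below - the names Bank, verify_and_debit and purchase, the toy balance table, all of it - is invented for illustration; it is not POSB's actual protocol, just the approve/deny flow as the posting describes it.]

    class Bank:
        # Toy bank: balances keyed by (account, PIN).
        def __init__(self, balances):
            self.balances = balances

        def verify_and_debit(self, account, pin, amount):
            # Check PIN and funds; on success the money is debited at
            # once, just as with an ATM withdrawal.
            key = (account, pin)
            if self.balances.get(key, 0) < amount:
                return False    # wrong account/PIN or insufficient balance
            self.balances[key] -= amount
            return True

    def purchase(bank, amount, account, pin):
        # The cashier keys in only the amount; account and PIN come from
        # the customer's shielded keypad and go straight to the bank.
        if bank.verify_and_debit(account, pin, amount):
            return "Transaction Approved"
        return "Transaction Denied"

    bank = Bank({("savings", "1234"): 100.00})
    print(purchase(bank, 30.00, "savings", "1234"))  # Transaction Approved
    print(purchase(bank, 90.00, "savings", "1234"))  # Transaction Denied (only 70 left)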
All in all, it's neat, handy and convenient. I haven't heard of any fraud involving that system so far, so I would assume that it is safe. (Standard ATM safeguards should apply.)

...Rue

------------------------------

Date: Thu, 26 May 88 14:53:40 EDT
From: saponara@tcgould.TN.CORNELL.EDU (John Saponara)
Subject: A Thumbnail Sketch of Daedalus (Eric Haines)
Organization: Cornell Theory Center, Cornell University, Ithaca NY

There seems to be a little confusion about whether the Daedalus column about recording financial transactions on the thumbnail was a joke or not. It was. The author is actually David E. Jones, a freelance consultant who brainstorms for a living. He also writes the "Daedalus" column for "New Scientist" magazine in Great Britain. This column is usually about some strange, humorous concept that his friend Daedalus is working on at the bustling, mythical Dreadco research labs (whose motto is probably "A Growing Concern"). What's interesting is that about 17% of his ideas have been seriously studied as possibilities by various groups. For example, a method of building prototypes based on shining UV light into a vat of photopolymer was just described in the May 1988 issue of "Computer Graphics World". This idea was presented back in 1982 through "Daedalus". Anyway, I highly recommend David Jones' hilarious, thought-provoking book "The Inventions of Daedalus", which is a collection of these columns.

-- Eric Haines (not John Saponara, no matter what the mail header says)

------------------------------

Date: Thu, 26 May 88 03:48:57 edt
From: chip@eniac.seas.upenn.edu (Charles H. Buchholtz)
Subject: More on programmed trading

Since people are still talking about programmed trading, I thought I'd pass this along. When I first started reading about programmed trading in RISKS, I asked a financial analyst friend of mine what she thought. She replied that there are (at least) two types of programmed trading: arbitrage and portfolio insurance. What follows is my understanding based upon her comments; I hope that someone more familiar with finance will correct my inevitable misconceptions.

Arbitrage involves monitoring two or more prices (usually of related items in different markets) and responding quickly to small fluctuations. This acts as a stabilizing force in the market: when price A drops relative to B, B is sold and A is bought, which acts to raise the price of A and lower the price of B. This is a computerized application of the adage "Buy low and sell high".

Portfolio insurance attempts to assure a reasonable rate of return on a portfolio of diverse investments. The portfolio insurance program sells when the price drops, and buys when the price rises, in an attempt to get out of falling markets and into rising ones. This "Buy high and sell low" philosophy acts to reinforce market movement. It works as long as the volumes traded are small enough not to affect the prices involved. If too much of the market is traded according to this system, chaos will result.
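[The contrast can be made concrete with a toy simulation in Python. The sketch assumes a linear price impact - each unit traded moves the price by a fixed amount - which is my own simplification, not a claim about real markets. Under that assumption the arbitrage rule narrows a price gap step by step, while the portfolio-insurance rule pushes a moving price further in the direction it was already going.]

    IMPACT = 0.01   # assumed price move per unit traded (illustrative only)

    def arbitrage_step(price_a, price_b):
        # Buy the cheaper item, sell the dearer one; both trades act to
        # narrow the gap, so arbitrage is stabilizing.
        if price_a < price_b:
            price_a += IMPACT   # buying A pushes A up
            price_b -= IMPACT   # selling B pushes B down
        elif price_b < price_a:
            price_b += IMPACT
            price_a -= IMPACT
        return price_a, price_b

    def insurance_step(prev_price, price):
        # Sell into a drop, buy into a rise; the trade itself pushes the
        # price further the same way, so the movement is reinforced.
        if price < prev_price:
            price -= IMPACT
        elif price > prev_price:
            price += IMPACT
        return price

    a, b = 100.00, 100.10
    for _ in range(3):
        a, b = arbitrage_step(a, b)
    print("arbitrage gap: 0.10 -> %.2f" % (b - a))   # narrows to 0.04

    prev, p = 100.00, 99.90
    for _ in range(3):
        prev, p = p, insurance_step(prev, p)
    print("insured price: 99.90 -> %.2f" % p)        # slides to 99.87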
---Chip

Disclaimer: U. of P. doesn't even know I'm writing this, and I'm sure the folks at Wharton know much more about this than I do.

------------------------------

Date: 26 May 88 14:18:03 GMT
From: nsc!taux01!taux01.UUCP!amos@Sun.COM (Amos Shapir)
Subject: Re: Computers as a weapon?

The computerized roadside ambush operation to catch tax evaders was not designed especially for the occupied territories; it started in Israel itself a few years ago. Its main goal is to catch businesses without a formal address, such as free-lance taxi drivers. All the sources of risk presented in the article as aspects of 'life in an occupied land' (such as the connections among government data bases, and the requirement to carry an ID card) are also imposed on Israeli citizens. Many western democracies use similar methods.

Amos Shapir, National Semiconductor (Israel)

------------------------------

Date: Wed, 11 May 88 17:50:32 EDT
From: msb@sq.com (Mark Brader)
Subject: Re: Risks of automatic test acknowledgement
Originally-from: Carl S. Gutekunst

The unmoderated nature of Usenet sometimes leads to people doing silly and destructive things, but the following, I think, is a new one. There is a newsgroup "misc.test" on Usenet for test postings, as well as a variety of local-area or restricted-distribution newsgroups that are used for the same purpose whenever such distribution will suffice for what is being tested. Now, many sites operate automatic acknowledging programs that attempt to send mail to anyone posting an article in these groups, so that the poster knows where their test message has reached. With this build-up, you can probably guess what's coming. The following article was posted by Carl S. Gutekunst:

>A 2300-line message was posted to misc.test (and cross-posted to talk.bizarre)
>by 22116@pyr1.acs.udel.EDU that refers to itself as the "misc.test digest."
>It contains the complete text of all the misc.test messages posted within
>the past month or so, a total of 107 articles. This awesomely stupid maneuver
>was topped by pst@comdesign.UUCP reposting the same message to alt.test.
>
>The posting of two 68-Kbyte messages to test groups is trivial compared to
>the effect of all the echo reflectors out there. Every one is forwarding the
>damn postings back to the sender. Worse, at least one standard reflector
>script, Erik Fair's, echoes mail to *EVERY* "Path:" line in the test article.
>Since the article contains 109 Path lines, we mailed the 68K posting to all
>109 of them!
>
>We have broken the UUCP link to comdesign, and are trashing every copy of
>the test message that we can find. Unfortunately, nearly all of them already
>went out during the night, and we apologize to all of you who found this
>monster in your inbox this morning. Other sites, especially those running
>echo reflectors, should survey their own spool partitions and squash as many
>of these as they can.
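[The amplification is easy to quantify. The sketch below, in Python, is a reconstruction for illustration only - it is not Erik Fair's actual script - showing how a reflector that answers every "Path:" header line turns one test article into one outgoing copy per line.]

    def reply_targets(article):
        # One mail target per "Path:" header line found in the article.
        return [line[len("Path:"):].strip()
                for line in article.splitlines()
                if line.startswith("Path:")]

    fragment = "Path: siteA!user\nSubject: test\nPath: siteB!user\n"
    print(reply_targets(fragment))       # ['siteA!user', 'siteB!user']

    # With the figures from the posting: 109 Path lines, a 68-Kbyte article,
    # so one naive reflector mails out roughly 109 * 68K bytes.
    paths, size_kb = 109, 68
    print("mailed out: about %d Kbytes" % (paths * size_kb))   # ~7412 Kbytes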
------------------------------

Date: Mon, 30 May 88 17:32:51 +0300
From: Y. Radai
Subject: The Israeli Virus Bet Revisited

This is to report on the results of the "virus bet" which was made on an Israeli television program at the beginning of April (see RISKS 6.62). Although the outcome was already announced in RISKS 6.84 by Amos Shapir, the story is much more involved than was described there. (In fact, it was not quite accurate to describe the outcome as a draw.) Since I think the details will be of interest to some readers, I am offering the following more complete report.

As will be recalled, the bet originated with a pair of students, Yuval Rakavy and Omri Mann, who had previously written and freely supplied software to detect, prevent, and eradicate the four known viruses which had invaded IBM PCs in Israel, and who had now written a program which they claimed could detect infection of a disk by *any* virus under PC-DOS or MS-DOS. Interviewed on television on April 4, they were unexpectedly confronted by the director of an established software house, who challenged the students to a bet on the correctness of their claim, for an amount equivalent to about $6200. Since the names of the persons and companies involved are unlikely to be of much interest to the non-Israeli reader, I shall refer to the authors of the program as the "defender" and to the challenging director as the "attacker".
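[The defender's method was never published, so nothing below should be read as a description of it. Purely as an illustration of the general class of defense involved, here is a minimal checksum-based integrity check in Python, of the kind commonly used to detect that something has modified files on a disk. Note that the baseline snapshot must itself be taken on a known-clean system, which is one reason booting from a trusted disk figures in such schemes.]

    import hashlib

    def snapshot(paths):
        # Record a checksum for each file while the system is known clean
        # (e.g., immediately after booting from a trusted disk).
        return {p: hashlib.md5(open(p, "rb").read()).hexdigest()
                for p in paths}

    def changed(paths, baseline):
        # Files whose contents no longer match the clean snapshot - for
        # example, executables to which a virus has appended itself.
        return [p for p in paths
                if hashlib.md5(open(p, "rb").read()).hexdigest()
                   != baseline.get(p)]

    # Usage: baseline = snapshot(["COMMAND.COM"]); ...run programs...;
    # then changed(["COMMAND.COM"], baseline) lists modified files.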
In the agreement which was drawn up on April 27 between the two parties it was stated that the defender "claims that he has a method of detecting the propagation of any virus", where a virus is "software that reproduces within a computer and between computers." The attacker, on the other hand, "claims that the method which [the defender] presented is not good against every virus." Both parties were required to submit to the referees by May 4 flowcharts and written descriptions of their algorithms. The attacker had also been supplied on April 10 with a Beta version of the defender's program (in non-disassembleable form).

Unfortunately, certain key points were not spelled out in the agreement. First, the terms 'method', 'detection', 'propagation' and 'reproduction' were not defined, and the correctness of the claims could depend on the meanings assigned to these terms. More important, the agreement did not specify precisely what would have to occur in order for the referees to declare that the attacker or the defender had won the bet. Presumably it would be agreed that if the defender's method failed (due to a shortcoming of the method, as opposed to a mere bug in the program) to detect the propagation of one of the attacker's viruses, it should be concluded that the defender had lost. On the other hand, even if the defender's program succeeded in detecting propagation of all viruses submitted by the attacker, this would not prove that "he has a method of detecting the propagation of *any* virus". Taken literally, it would seem that the defender had no possibility of winning. Of course, the reasonable position would be to declare the defender victorious in such a case.

However, the situation was complicated by the introduction of a theoretical aspect. The defender's insistence on the word 'method', rather than 'program' or 'software', in the agreement was partly to express the fact that the Beta version of his program might contain a bug, and partly to justify submission of a theoretical proof that the method on which his program is based guarantees detection of the propagation of any virus under certain specified assumptions. Just how submission of this proof affected the criterion for victory is not spelled out in the agreement. Did the defender's proof have to be certified as correct and complete in order for him to win? Did he have to win on *both* the empirical and theoretical fronts, or on only one of them? Was it possible that *both* parties could be declared losers?

I think that if an effort had been made to obtain agreement on these and all similar questions in advance, the bet would have been much fairer, and perhaps one of the parties might even have decided that there was no point in continuing with the bet.

In any case, when the attacker's viruses were tested, their propagation was detected by the defender's program in every case. However, the outcome, as decided by the two referees on May 8, was not only that the attacker had lost, but so had the defender! (The referees emphasize that this is *not* the same as a draw.) Their arguments were as follows:

On the one hand, they admitted that under certain conditions users of the defender's method "indeed gain a defense which makes it difficult for viruses to penetrate the system" and that the attacker "did not succeed in proving unambiguously that the *method* which [the defender] presented is not good against every virus." On the other hand, they contend that the defender's claim is substantiated "at most in a work environment which is very restricted and limited by heavy constraints" and that the viruses created by the attacker "were very effective, and succeeded in penetrating the defense ... in situations in which not all the (generally impractical) safety rules required for protecting the system were observed."

And what are these impractical rules? The only clue we get to this is in the following passage: "... only immediately after booting ..., could a long series of operations be performed without fear of infection by a virus ...." However, rebooting is recommended in the defender's method (in certain situations) only in order to *prevent* infection, whereas the subject of the bet was *detection* of infection. And even if rebooting were necessary for purposes of detection, while this would certainly be an extremely important *practical* consideration, for purposes of the *bet* it would be entirely irrelevant. I therefore find the referees' mention of this point in their decision extremely peculiar.

Another quite peculiar passage in the referees' decision is the following: "There is, in our humble opinion, at least one method which can breach the defense ... and due to lack of time and lack of will to create a virus, we have declined to implement it." Just what this method is they categorically refuse to state, not only in their public decision but even privately to the defender. (Imagine a trial in which a judge admitted that the prosecution had produced no valid evidence, but nevertheless found the defendant guilty on the grounds that the judge claimed to possess evidence of his own which he refused to reveal!) Under the circumstances the referees' declaration remains completely unsubstantiated and can hardly serve as a legitimate basis for a judgment against the defender.

A point which apparently influenced the referees strongly was the fact that after the agreement was signed, the defender modified his program, not only to fix what were clearly bugs, but also (in the referees' words) to make a change "in the region between correction of a programming error and updating of the defense method" in order to detect a certain type of virus (one which depends on a certain peculiarity of DOS which I shall not reveal here); as a result the program detected propagation of one of the attacker's viruses that would otherwise have gone undetected.
The referees state that "this process [of improving the program each time a new virus goes undetected] is, in our opinion, infinite". On the other hand, the defender states that he thought of this improvement himself and not as a result of the attacker's virus, and he claims that it does not constitute a change in his *method*. Whether this is correct or not depends on what is meant by a 'method'. In any case, the defender replaced his software on May 4, the day on which the two parties were required to present their flowcharts and algorithms and the attacker to present his viruses. Given this fact, it seems to me that even if this is construed as a change in method, it should not have counted against the defender.

The referees conclude: "The declaration that it is possible to detect *any* virus is irresponsible, borders on misleading the public, and stems perhaps from a naivety according to which the mechanisms of action of a virus must fulfill a set of assumptions which [the defender] makes, assumptions which were not always found to be justified."

Here there is much to comment on. This is the only place where the referees (apparently) refer to the defender's proof. However, they do not point to any error in any step of that proof. I would understand if they expressed skepticism concerning the defender's claim that he supplied an airtight proof covering all possible cases. However, their complaint is with the assumptions. But *which* particular assumptions are "not always justified"? *Why* are they unjustified? On these questions the referees remain as silent as on their mysterious method for breaching the defense. Moreover, it is not at all clear on what basis it could be decided that a given assumption is not justified, considering that many of the assumptions are simply part of the defender's method.

Secondly, the charges of irresponsibility and misleading the public (the phrase sounds as if such action was deliberate and malicious, and it was played up by the press) are extremely harsh under the circumstances; not only has it not been demonstrated that the defender's claim is false, but even if this is assumed for the sake of argument, the referees themselves admit that the defender's claim may stem from mere naivety. Taken together with the previously mentioned peculiarities, these charges raise a certain suspicion that the referees were not entirely objective in their decision. It must be added that they tried to dissuade the defender from accepting the bet in the first place. Given their approach, this was certainly fair on their part. However, there is reason to suspect that once the defender declined their advice, the negative verdict was practically determined in advance.

Incidentally, I attempted to interview one of the referees, both to obtain an explication of what precisely would have had to occur in order to decide that the defender had won (if indeed there was any such possibility!) and to obtain their reactions to the peculiarities mentioned above. However, he was very uncooperative, refusing to elaborate on anything beyond what was printed in the official decision.

In conclusion, I think that there was much injustice in the decision, and yet much was learned from the challenge, not only in perfecting the defense against less obvious types of viruses, but also in revealing the RISKS involved should anyone else feel inclined to take on a challenge of this sort (cf. Dennis Director's invitation in RISKS 6.79).
Y. Radai, Computation Center, Hebrew Univ. of Jerusalem, RADAI1@HBUNOS.BITNET

P.S. The opinions expressed above on the referees' decision are based on the evidence available to me at the time of writing. Moreover, they do not necessarily reflect those of the Hebrew University or of anyone other than myself.

------------------------------

End of RISKS-FORUM Digest
************************