Subject: RISKS DIGEST 18.55

RISKS-LIST: Risks-Forum Digest  Weds 30 October 1996  Volume 18 : Issue 55

   FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
   ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

***** See last item for further information, disclaimers, caveats, etc. *****

Contents:
  S-Bahn stopped by new switching software (Debora Weber-Wulff)
  Privacy: Bring back ticker-tape for the next N.Y. parade (Bruce R Koball)
  Child Pornography Hoax (Edupage)
  Risks of taking porno spam at face value (Pete Mellor)
  Beating the GRE: What time zone are you in? (from Manny via Dave Farber)
  Leonard Levine and Computer Privacy Digest (Peter G. Neumann)
  A new use of a new crypto attack (Jean-Jacques Quisquater)
  Re: A new attack on DES (Tony Lauck, Walt Farrell)
  Characterization of Research (William Hugh Murray)
  Re: $850 Million Social Security Problem (Mark Brader)
  Re: Franklin National Bank (R Ken Brown)
  Re: When is -32768 != -32767-1 ? (Mark Brader)
  Wasted redundancy (Ian Brogden)
  Abridged info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: 27 Oct 1996 15:07:50 GMT
From: weberwu@tfh-berlin.de (Debora Weber-Wulff)
Subject: S-Bahn stopped by new switching software

The Berliner Tagespiegel reported this week on the new light-rail switching
software that was installed the same weekend that the light rail (S-Bahn)
was moved back from the regular train track to its own tracks, which had
been under repair for some time.  The tracks were cut off all day Saturday
and Sunday, with buses attempting to move passengers.  The software is
installed at a central switching board, so that the transportation company
can save the money it would otherwise pay real people to move the switches
manually.

The software kicked in, and on Monday all went well until rush hour hit -
yep, you guessed it, a stack overflow, just like in Hamburg.  And the same
large German company that was responsible for the Hamburg fiasco wrote this
software - it may even be the exact same software.  Will they ever learn to
*do* quality assurance at this company and not just talk about it?  It took
hours to get the system back up.

The newspaper quotes (ironically?) a spokesman as saying that the software
control is very modern, since there is only one point at which it can go
wrong.  Of course, if that single point goes wrong...

I spoke this morning with a nameless higher-up at the transportation
company, who just said "Software always has errors.  We're just happy that
no one gets killed when the software fails."  He wouldn't believe me when I
said that we have ways of ensuring that software won't fail.

Debora Weber-Wulff, Technische Fachhochschule Berlin,
Luxemburger Str. 10, 13353 Berlin, Germany  weberwu@tfh-berlin.de

------------------------------

Date: Wed, 30 Oct 1996 08:07:49 -0800 (PST)
From: Bruce R Koball
Subject: Privacy: Bring back ticker-tape for the next N.Y. parade

As most of the known universe is aware (or soon will be, c being a
constant), the New York Yankees won the World Series this year... so,
yesterday, the Big Apple honored them with its traditional tribute, a
ticker-tape parade... a modern twist to the event, however, is noted in the
following excerpt from *The New York Times*, 30 Oct 1996:

> Ticker tape being an obsolete relic, people hurled everything from
> shredded paper to confetti to toilet tissue.
> In fact, several government agencies across from city hall dumped entire
> boxes of public and confidential records out the window, without
> troubling to shred them.  Down came checks issued by the New York City
> Housing Authority and records of unemployment checks from the New York
> State Department of Social Services.

...which goes to show that the risks to the privacy and confidentiality of
public record systems are not engendered solely by computers and
telecommunications technologies... ultimately they begin with people...

  [This should be no surprise to long-time RISKS readers.  In RISKS-3.90,
  30 Oct 1986, ten years ago to the day, I noted essentially the same
  scenario after the New York Mets won the 1986 World Series.  PGN]

------------------------------

Date: Thu, 24 Oct 1996 14:04:07 -0400 (EDT)
From: Edupage Editors
Subject: Child Pornography Hoax (Edupage, 24 Oct 1996)

The FBI is saying that the recent widely distributed e-mail message
inviting recipients to buy child pornography is a hoax; the message was
apparently sent from New York City.  (*Ottawa Citizen*, 23 Oct 1996, A4)

------------------------------

Date: Tue, 22 Oct 96 11:32:07 BST
From: Pete Mellor
Subject: Risks of taking porno spam at face value

The following is the text of a letter which I faxed to The Editor, London
Evening Standard, today:-

As the recipient of two different versions of the e-mail message supposedly
advertising child pornography, I (like many others) at first assumed it was
genuine and complained to our e-mail postmasters.  Although they take this
sort of misuse of the Internet very seriously and are investigating it,
they pointed out that the message is extremely unlikely to be genuine,
particularly since it contained the name and home address of an individual.
This opinion is shared by UKERNA, the authority responsible for the
academic network in this country.

It is probably a ``spam'' or hoax in which a message is sent by a malicious
person in such a way that it appears to come from someone else.  The motive
was probably to discredit or to cause other damage to the individual named
in the e-mail message, damage which the article in your edition of
21 October has aggravated, since it actually names this person.  A little
elementary fact-checking would have been advisable.

(I was also amused to see that the messages I received came from electronic
addresses on the America On-Line (AOL) service provider.  AOL achieved fame
a short while ago by allegedly refusing a subscription from a user on the
grounds that four of the letters in the name of his home town constitute a
rude word!  [RISKS-18.07 and 08])

Peter Mellor, Centre for Software Reliability, City University, Northampton
Square, London EC1V 0HB, UK.  Tel: +44 (171) 477-8422, Fax: +44 (171) 477-8585

------------------------------

Date: Mon, 28 Oct 1996 18:40:50 -0500
From: Dave Farber
Subject: Beating the GRE: What time zone are you in? (from Manny)

This scheme is pretty clever!  Of course, it doesn't address the East Coast
market, but one can take the tests in Europe too.  I took the computer
science GRE in Switzerland (got a perfect score too :-).  That's a six hour
time difference!  It never occurred to me to make notches in my pencil to
record the answers ...

I'm not sure what the charges are for prosecuting such a thing.
Kobayashi's test-takers' answers are not copyrighted by ETS: the test
takers are not given the answers together with a copyright notice.
Manny

   College Test Cheaters Arrested

Federal prosecutors in New York say a California man has been arrested on
charges of operating an elaborate scheme to help prospective graduate
school students cheat on admission exams.  Prosecutors say George Kobayashi
ran a company that provided test-takers with correct answers in code on
pencils that the students carried into the exam with them.  Authorities say
Kobayashi paid a team of experts to take each exam in New York City using
assumed names.  These experts then telephoned the correct answers to
Kobayashi's office in Los Angeles, where the answers were quickly coded
onto pencils by Kobayashi's employees and then provided to the cheating
students.  The scheme worked because of a three-hour time difference
between the cities.

------------------------------

Date: Tue, 29 Oct 96 11:37:26 PST
From: "Peter G. Neumann"
Subject: Leonard Levine and Computer Privacy Digest

For those of you who are recipients of the Computer Privacy Digest
(comp.society.privacy) and are wondering why you have not received any
issues lately, the moderator, Leonard P. Levine, just called me to tell me
that he had a heart attack on 13 Oct 1996, followed by a successful bypass
operation.  He is recovering nicely and wants you to know that the Digest
will eventually continue.  He insisted that he would like to be left alone
(please, no e-mail or telephone calls or faxes to him or to the Digest
until further notice), although he said he would not be averse to postcards
(Leonard Levine, Dept of EECS, University of Wisconsin-Milwaukee, Box 784,
Milwaukee WI 53201) if postmarked no later than 13 Nov 1996 (to stave off
readers of the RISKS archives many moons from now).

There are two interesting RISKS connections.  While on the gurney being
wheeled into the operating room, Leonard observed that on his chest was a
piece of equipment from Marquette Electronics that had been designed by his
students.  (I presume he took great personal pleasure in the fact that it
worked correctly.)  He also says he learned a lot more about the medical
privacy issues.

------------------------------

Date: Thu, 24 Oct 1996 00:08:55 +0200 (MET DST)
From: Jean-Jacques Quisquater
Subject: A new use of a new crypto attack (Re: Shamir, RISKS-18.54)

Research announcement:

  Short cut for exhaustive key search using fault analysis:
  Applications to DES, MAC, keyed hash function, identification
  protocols, ...

  Jean-Jacques Quisquater, UCL Crypto Group, Louvain-la-Neuve, Belgium
  jjq@dice.ucl.ac.be  23 Oct 1996  (ABSTRACT and DRAFT)

  I confirm that the timing attack was well known to designers of smart
  cards for some time.
     --- Jean-Jacques Quisquater, Dec. 20, 1995, sci.crypt

  I'm a bit puzzled by the excitement over error analysis attacks -- they've
  been known for some time to cryptosystem implementors ...
     --- Paul Kocher, Oct. 20, 1996, comp.security.misc

  How to find a secret key faster than the exhaustive search without the
  help of the differential analysis.
     --- This paper

During the last few months, very interesting programs, papers and
announcements were released about the (cryptanalytic) use of transient
faults in tamper-resistant (or tamper-proof) devices by:

  - some well-known anonymous authors (in the payTV context; SFS);
  - Anderson and Kuhn (applications well fitted to the real world);
    [See Ross Anderson's comments in RISKS-18.52.  PGN]
  - Boneh, DeMillo, and Lipton: they specifically attack public-key
    cryptosystems; their core attack sounds an alert to the scientific
    community to publish faster;
  - Biham and Shamir: they described how to obtain a secret key (e.g., for
    DES) using few ciphertexts.

This list is open, and we reserve some room in this paper for future
unknown authors.

Here we describe a new use of such attacks in order to accelerate
exhaustive key searches in several contexts.  We do not discuss whether
these attacks are feasible: our main goal is to enumerate all possible
attacks and their cryptanalytic use against specific models.  Knowing this
will improve our trust in current or future devices, in order to obtain a
reasonable level of security in a complete system.

Introduction

We suppose that the opponent is in possession of the secure device, is able
to know the (external) inputs and the outputs, and can apply some physical
constraints in order to trigger some transient errors at some random
locations (RAM, registers, ...).  We suppose that these errors do not
interfere with the program used by the computations: these errors only
modify some data at some stage of the computation.  We do not discuss here
the possibility of permanent errors: that will be explained in the full
paper (incremental permanent errors with possible use of several secure
devices, ...).

There are many protocols where the input message and the corresponding
output message are accessible to everybody, including the opponent, if the
device is physically in the hands of the user (maybe for some short period
of time).  Let

  f  be a public cryptographic function, computed by some secure device
     (smart card, secure black box, security hardware, ...),
  k  a secret key, stored by the secure device; guessing its value is the
     goal of the opponent in this paper,
  n  the number of bits of the key k,
  K  the set of all keys for f,
  N  the number of possible keys in K,
  m  an input message,
  c  an output message.

General protocol:
  input:  m
  output: c = f(m, k)

Examples:
  - encryption of m by any secret-key algorithm (DES, IDEA, ...),
  - decryption of m by any secret-key algorithm (DES, IDEA, ...),
  - computation of the hash of m by the keyed hash function f (MAC, ...),
  - m is a random number used by some identification protocol,
  - ...

Such protocols very often need some protection against possible abuses from
messages m well chosen by the opponent (see Biham-Shamir, Matsui, Vaudenay,
...) or from not-so-random numbers.  One necessary condition is to avoid
the discovery of k by exhaustive key search; such a general search
algorithm is now described.

Key search algorithm:
  Given m, c
  Enumerate all candidate keys i from K
    Compute f(m, i) = c_i
    If c_i = c then (output i and stop)
  End of loop.

Mean work factor: N / 2.

Indeed, if we suppose that the key is unique for each pair (m, c) (this is
not true for DES: there are sometimes collisions), then the number of
computations of f is N/2 for the mean case and N for the worst case.  The
goal of this paper is to show how to improve on this complexity by the use
of (randomly activated) transient faults in the secure device.

Model 1: Single fault in the secret key

- Working hypothesis: the opponent is able to modify one bit of the key k
  at a random location (the new transient key is then k*) and to get the
  correct result of f(m, k*); after the reset of the secure device by the
  opponent, the internal secret key is again the correct one, k.  We
  suppose that the random modification is equidistributed over all n bits
  of the key k.
- Key search algorithm: given m,

  1. Physical attack on the secure device:
     Obtain the n possible pairs (m, c_j), where c_j is equal to f(m, k_j)
     and k_j is the modified key k with the jth bit flipped.
     We need about n*log n "questions" to obtain the n different pairs (by
     the coupon collector's paradox, in some way the dual of the birthday
     paradox); that is, if f is the DES for instance, about 300 accesses to
     the secure device.

  2. Enumerative key search on an external key search machine:
     Given m, the c_j's
     Enumerate (pseudo-)randomly all candidate keys i from K
       Compute f(m, i) = c_i
       If c_i = one of the c_j's then (output i^* = i and stop)
     End of loop.

  3. Key correcting algorithm on an external computer:
     Given m, i^*, c
     Enumerate the n values i coming from i^* with one flipped bit at each
     of the n possible positions
       Compute f(m, i) = c_i
       If c_i = c then (output i and stop)  !the secret key is found!
     End of loop.

Mean work factor: (N / (2*n)) + n.

Indeed, the first step is very fast, the second step needs the mean work
factor of the exhaustive key search divided by the number of bits of the
key (if we suppose that the modified keys are randomly distributed in the
key set K), and the last step indeed needs n computations of f.

For DES this means about 2^49 computations: let us recall that one FPGA
device in one proposed implementation (see van Oorschot and Wiener) is able
to do about 2^26 computations of DES, with key change, each second (using a
pipelined implementation it is possible to compute a DES at each clock
tick, and we assume here a quite feasible clock of 65 MHz).  This means
that one secret key will be recovered by such a small, accessible, and
inexpensive machine in 2^23 seconds, that is, less than 4 months.  With p
FPGAs working in parallel, that time will be divided by p.

The comparison operation in step 2 needs a modification of such a machine
(a very easy step in software): it is a simple modification, and thanks to
the paper [Desmedt, Quisquater, EUROCRYPT '87] it is possible to implement
it for the case of many c_j's without any large expense, using a very
simple (non-cryptographic) hash function.

Model 2: Multiple faults in the secret key

- Working hypothesis: The opponent is able to modify one or several bits of
  the key k at random locations (the new transient key is k*) and to get
  the correct result of f(m, k*); after the reset of the secure device, the
  internal secret key is again k.  We suppose that the random modifications
  are equidistributed over all n bits of the key k.

The main idea is that the opponent is able, with high probability, to
randomly change a few bits of the secret key.  It is easy to adapt the key
search algorithm to that context.  The complexity of the attack is directly
related to the number of modified keys.  Step 3 is also easy: if the key
change is too large in relation to the number of flipped bits, it is
sometimes necessary to skip the search and begin a new one.

Model 3: Attacking several secret keys in parallel using several secure
devices

It is easy to see that the first secret key will be found with a number of
computations equal to the number needed for the first two models divided by
the number of secure devices used in parallel.  This means a very fast
discovery in the case of a massive attack.
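
To make Model 1 concrete, here is a minimal C sketch of the three steps run
against a 16-bit toy cipher.  Everything in it (the mixer toy_f(), the
simulated fault injection in faulty_device(), the key size KEYBITS, and
SECRET_KEY) is an illustrative assumption standing in for DES and a real
tamper-resistant device; the sketch only shows the shape of the
(N / (2*n)) + n search, not a practical attack.  (For DES itself, n = 56 and
N = 2^56, which is where the 2^49 figure above comes from.)

  #include <stdint.h>
  #include <stdio.h>

  #define KEYBITS 16u                  /* n: toy key size (DES would use 56) */
  #define NKEYS   (1uL << KEYBITS)     /* N: number of possible keys         */

  static const uint32_t SECRET_KEY = 0xBEEFu;  /* hidden inside the "device" */

  /* f(m, k): stand-in for DES/IDEA/a MAC; any mixer injective in k will do. */
  static uint32_t toy_f(uint32_t m, uint32_t k)
  {
      uint32_t x = m ^ (k * 0x9E3779B9u);
      x ^= x >> 13;  x *= 0x85EBCA6Bu;  x ^= x >> 16;
      return x;
  }

  /* Simulated secure device answering under a transient fault on key bit j. */
  static uint32_t faulty_device(uint32_t m, unsigned j)
  {
      return toy_f(m, SECRET_KEY ^ (1u << j));
  }

  int main(void)
  {
      uint32_t m = 0x12345678u;            /* known input                    */
      uint32_t c = toy_f(m, SECRET_KEY);   /* fault-free output              */
      uint32_t faulted[KEYBITS];
      unsigned long trials = 0;
      uint32_t t;
      unsigned j;

      /* Step 1: collect the n outputs under single-bit key faults.  (A real
         attacker sees random bit positions and needs ~n log n attempts;
         here we simply ask for each bit in turn.)                           */
      for (j = 0; j < KEYBITS; j++)
          faulted[j] = faulty_device(m, j);

      /* Step 2: enumerate candidate keys in a fixed pseudo-random order, as
         the announcement suggests, until f(m, i) matches ANY faulted
         output.  This hits one of the n faulted keys far sooner, on
         average, than the N/2 trials of a plain exhaustive search (about
         N/(2*n) per the text above).                                        */
      for (t = 0; t < NKEYS; t++) {
          uint32_t i  = (uint32_t)((t * 0x6487u + 0x3333u) & (NKEYS - 1));
          uint32_t ci = toy_f(m, i);
          trials++;
          for (j = 0; j < KEYBITS; j++) {
              if (ci == faulted[j]) {
                  /* Step 3: the true key differs from i in exactly one bit. */
                  unsigned b;
                  for (b = 0; b < KEYBITS; b++) {
                      uint32_t k = i ^ (1u << b);
                      if (toy_f(m, k) == c) {
                          printf("recovered key 0x%04X after %lu trials "
                                 "(plain search averages %lu)\n",
                                 (unsigned)k, trials,
                                 (unsigned long)(NKEYS / 2));
                          return 0;
                      }
                  }
              }
          }
      }
      printf("no key recovered\n");
      return 1;
  }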
In the complete paper we will explain

  - how to filter the noise efficiently (transient errors with useless
    output),
  - how to combine such an attack with the one by Biham and Shamir,
  - how to resist these attacks without expensive computations by the
    secure device,
  - why this attack is useful to know about for public-key cryptosystems.

Conclusion

We describe a new use of the attack by transient faults in a secure device:
without any new protection, and if this attack is feasible, it means that a
secret key will be obtained with about N / log_2(N) computations (or less!)
instead of the N/2 computations of the normal exhaustive key search.  In
that case this attack is really shortening your keys.

------------------------------

Date: Mon, 21 Oct 1996 18:53:33 -0400
From: Tony Lauck
Subject: Re: A new attack on DES (RISKS-18.54)

In the early days of DES, the US government called out another standard for
use in conjunction with DES, Federal Standard 1027.  As I recall, this
standard called for redundant hardware or other means of ensuring that
hardware malfunctions could not affect the accuracy of encryption or
decryption.

FED-STD 1027 was superseded by FIPS 140.  The FIPS are available at
http://www.nist.gov/itl/lab/fips/, where the current version is FIPS 140-1,
"Security Requirements for Cryptographic Modules".  I quote:

  "Level 4 also protects a module against a compromise of its security due
  to environmental conditions or fluctuations outside of the module's
  normal operating ranges for voltage and temperature.  Intentional
  excursions beyond the normal operating ranges could be used to thwart a
  module's defense during an attack.  A module is required to either
  include special environmental protection features designed to detect
  fluctuations and zeroize critical security parameters, or to undergo
  rigorous environmental failure testing that provides a reasonable
  assurance that the module will not be affected by fluctuations outside of
  the normal operating range in a manner that can compromise the security
  of the module."

It seems reasonable that NSA knew of Differential Fault Analysis in the
1970's.

Tony Lauck  tlauck@cerfnet.com

------------------------------

Date: Tue, 22 Oct 96 09:34:35 EDT
From: "Walt Farrell"
Subject: Re: A new attack on DES (Shamir, RISKS-18.54)

I found this report interesting, but I think that it should have referred
to "random" rather than "unknown" cleartext messages.  Since the attack
requires that the attacker have physical possession of the encryption
device and the cleartext messages (in order to encrypt them multiple
times), clearly the attacker has the capability of "knowing" the cleartext
contents directly.

To work with truly "unknown" cleartexts, I would consider only attacks that
dealt with just the cyphertext and didn't require access to the cleartext.

Walt

------------------------------

Date: Wed, 23 Oct 96 07:14 EST
From: William Hugh Murray <0003158580@mcimail.com>
Subject: Characterization of Research (Re: Shamir, RISKS-18.54)

Last night I visited the National Cryptologic Museum.  I got to click the
wheels on an Enigma and was reminded of the story that the British had
bugged a room in France and got information about the key from listening to
the clicks when the key was entered.

Bruce Schneier reminds me that there are more ways to stress a crypto
engine than are dreamed of in my philosophy, and that any phenomenon that
can be observed from outside the engine and that varies with its operation
can leak information about the key.
Jerry Leichter (*) reminds me that hiding these things is a constant
battle, and that is certainly confirmed by my experience.  [* Spelling
FIXED in Archive copy.]

The cryptographers can help by properly characterizing their work.  As I
understand the work reported by Biham and Shamir, it demonstrates that the
recent Bellcore work, which its authors admitted they had demonstrated only
against implementations of algebraic algorithms, can be extended and
applied to Feistel algorithms.  In both cases, what is reported is
theoretical, at least in the sense that the attacks have not yet been
demonstrated against any existing engines.  (Based upon the "sky is
falling" calls that I have received from reporters, I can guarantee you
that little of this is understood.)

While the theoretical target of their attack was the DES, this work tells
us nothing new about the DES.  One point of the report was that the DES was
simply a chosen example of a broader class of algorithms that would yield
to such an attack.  It is not even aimed at the DES but rather at hardware
implementations.  Terry Ritter has suggested that it might be far less
effective against software implementations.  To characterize this work as
"a new attack on DES" is inaccurate at best and seriously misleading at
worst.

William Hugh Murray, CISSP, New Canaan, Connecticut

------------------------------

Date: Sat, 26 Oct 96 02:24:30 EDT
From: msb@sq.com
Subject: Re: $850 Million Social Security Problem (Lucero, RISKS-18.51)

Scott Lucero writes:

> In the Daily Brief, the *Los Angeles Times* reported that, according to
> Social Security Administration officials, some 695,000 Social Security
> recipients have been underpaid since 1972, due to a computer program error.

Never having lived in the US, I have no familiarity with the detailed
workings of Social Security.  However, the fact that an error like this
apparently went unnoticed for 22 years strongly suggests that either
(1) nobody who was receiving these payments ever reported the underpayment,
because none of them had any information about how the correct amount to be
paid was calculated, or (2) the underpayments were reported by, and
corrected for, a few cautious and diligent individuals, but nobody in the
SSA ever looked for a programmatic reason why they had occurred.

Either way the risks are obvious.  How DID the error finally come to light,
anyway?

Mark Brader  msb@sq.com  SoftQuad Inc., Toronto

------------------------------

Date: Mon, 28 Oct 1996 09:36:52 -0600
From: "Brown, R Ken"
Subject: Re: Franklin National Bank (Murray, RISKS-18.54)

Surely it was the attempted secrecy that caused the bank's customers to
lose their trust?  What are they trying to hide?  A press release along the
lines of:

  "We regret to announce that the XXXXX of the YYYYY department has been
  suspended from duty pending the outcome of an internal inquiry into an
  unexplained loss of ZZZZZ dollars.  On the positive side, the incident
  shows that the Bank's internal audit mechanisms are fundamentally sound
  and helped contain the loss to a figure that we can easily absorb.  We
  have learned lessons which will enable us to further tighten up our
  procedures in future."

might have headed the whole thing off at the pass.  I'm sure that the
culture of secrecy that surrounds banks actually encourages their customers
to distrust them.

------------------------------

Date: Sat, 26 Oct 96 02:08:58 EDT
From: msb@sq.com (Mark Brader)
Subject: Re: When is -32768 != -32767-1 ?
  (Giles, RISKS-18.49)

Since there was no further followup after Bear Giles' item in RISKS-18.49,
let me set this matter straight.  Compiler quality is not involved; integer
type sizes are.

The expression -32768 or (-32768) is valid on any C implementation, and
must have the value -32,768.  However, its type is implementation-defined:
it is either int or long.  On some implementations, the type int contains
16-bit values, so 32,767 is the largest int value; in that case the type of
the constant 32768 is long, so the expression -32768 must also be long.  On
all other implementations, ints are wider than 16 bits and the type of
-32768 is int.

In everyday contexts like

  short s = -32768;

this distinction is immaterial.  Either a long or an int on the right-hand
side of the = sign will be successfully converted to a short with the value
-32,768, if the type short can represent that value; if it can't (the range
can be just -32,767 to 32,767), then of course neither one will work
correctly.

But there are a few specific situations where the type of -32768 does
matter.  Bear described one of them in his original posting: where the
compiler issues a warning for every long-to-short conversion and then
proceeds to treat a large number of warnings as an error.  And the context
mentioned above, the file <limits.h>, is another special case, because the
C standard requires the macro SHRT_MIN to produce an expression whose type
is int, not long.  Hence on *some* implementations, SHRT_MIN must be
defined as (-32767-1) or some other such workaround, not (-32768); but on
other implementations, where ints are wider than 16 bits, the definition as
(-32768) is as correct as the other.

Since the file <limits.h> contains information specific to the
implementation, it does not need to represent it in a portable manner.
Hence GNU C is within its rights to use (-32768) there, but, and here is
the risk, it does not follow that a user can safely extract that expression
from that system file and expect it to work anywhere.

Mark Brader  msb@sq.com  SoftQuad Inc., Toronto

------------------------------

Date: Thu, 24 Oct 1996 18:48:44 -0700
From: Ian Brogden
Subject: Wasted redundancy

Some of the discussions on redundancy and backup systems remind me of an
automotive assembly plant where I once worked.  The plant engineers and
management were always very insistent on having redundant computer systems,
even though a failure wasn't catastrophic in 90% of the cases.  Typically
there was 45 minutes to an hour's worth of job instructions printed on
paper.

As a system implementor, I always argued that we spent as much time
ensuring there was adequate hardware redundancy as we did removing faults
from the software.  More attention to fault removal would have eliminated
enough of the reasons the plant wanted redundancy to make the redundancy
overkill.

When the primary system finally failed, it took at least as long to
activate the backup as it would have to fix the primary, but because there
WAS a backup, everyone worked on getting the backup up first.  Of course,
the key reason it took so long to get the backup going was that there was
so much stuff running on it, because no one wanted to "waste" that extra
hardware.

A Japanese customer of the same system took a different approach, and
didn't order a backup.  They felt it was much more important that the
primary get fixed immediately if it broke, and could tolerate the few hours
of downtime.

Ian Brogden, Toronto  i.brogden@ieee.ca
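
The following minimal C fragment is a sketch of the type distinction
described in Mark Brader's -32768 item above.  It is illustrative only: the
macro MY_SHRT_MIN is a hypothetical stand-in, not any vendor's <limits.h>
definition, and the comments assume a two's-complement implementation whose
short can hold -32,768.

  #include <limits.h>
  #include <stdio.h>

  /* On an implementation where int is 16 bits, the constant 32768 does not
     fit in int, so it (and hence the expression -32768) has type long.  On
     wider-int implementations both expressions are plain int.             */

  #if INT_MAX == 32767
  /* SHRT_MIN must expand to an expression of type int, so a 16-bit-int
     implementation cannot simply write (-32768); it needs a workaround
     such as the one below.                                                */
  #define MY_SHRT_MIN (-32767 - 1)
  #else
  /* Here int is wider than 16 bits, so (-32768) already has type int and
     either spelling would be correct.                                     */
  #define MY_SHRT_MIN (-32768)
  #endif

  int main(void)
  {
      short s = -32768;  /* fine either way, IF short can represent -32,768 */

      /* sizeof applied to the expression shows which type -32768 really
         has on this implementation.                                       */
      printf("sizeof(int) = %lu, sizeof(long) = %lu, sizeof(-32768) = %lu\n",
             (unsigned long)sizeof(int), (unsigned long)sizeof(long),
             (unsigned long)sizeof(-32768));
      printf("MY_SHRT_MIN = %d, s = %d\n", MY_SHRT_MIN, (int)s);
      return 0;
  }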
------------------------------

Date: 15 Aug 1996 (LAST-MODIFIED)
From: RISKS-request@csl.sri.com
Subject: Abridged info on RISKS (comp.risks)

The RISKS Forum is a MODERATED digest.  Its Usenet equivalent is comp.risks.

=> SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or
   equivalent) if possible and convenient for you.  Or use Bitnet LISTSERV.
   Alternatively, (via majordomo) DIRECT REQUESTS to
   <risks-request@csl.sri.com> with one-line,
     SUBSCRIBE (or UNSUBSCRIBE) [with net address if different from FROM:]
   or
     INFO [for unabridged version of RISKS information]

=> The INFO file (submissions, default disclaimers, archive sites,
   .mil/.uk subscribers, copyright policy, PRIVACY digests, etc.) is also
   obtainable from
     http://www.CSL.sri.com/risksinfo.html
     ftp://www.CSL.sri.com/pub/risks.info
   The full info file will appear now and then in future issues.
   *** All contributors are assumed to have read the full info file for
   guidelines. ***

=> SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line.

=> ARCHIVES are available:
     ftp://ftp.sri.com/risks
   or  ftp ftp.sri.com<CR>login anonymous<CR>[YourNetAddress]<CR>cd risks
   or  http://catless.ncl.ac.uk/Risks/VL.IS.html  [i.e., VoLume, ISsue].
   The ftp.sri.com site risks directory also contains the most recent
   PostScript copy of PGN's comprehensive historical summary of one liners:
     get illustrative.PS

------------------------------

End of RISKS-FORUM Digest 18.55
************************