Subject: RISKS DIGEST 11.21
REPLY-TO: risks@csl.sri.com

RISKS-LIST: RISKS-FORUM Digest  Wednesday 6 March 1991  Volume 11 : Issue 21
   FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS
   ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents: [Part of HUGE BACKLOG.]
  Medical image compression & ground for future litigation (David A. Honig)
  Flipped and misplaced bits (Jake Livni)
  Telco voice mail snafu (Gary McClelland)
  Are flaws needed for a standard to succeed? (David States)
  Carbon versus silicon, Minsky, etc. (George L Sicherman)
  A correction (Minsky, etc.) (Richard Schroeppel)
  Monopoly Security Policies for Thumb Prints (George W Dinolt)
  Re: But the computer person said it was OK! (Gord Deinstadt, Nick Andrew)
  Automatic patching revisited (Joe Morris)
  Trojan horses and password collectors...some retribution (Michael K. Gschwind via Joe Morris)
  Re: Red and green clocks (Mark Huth, Henry Spencer, Peter Monta, Dave Platt, Glen Ditchfield, Steven King)

The RISKS Forum is moderated.  Contributions should be relevant, sound, in good taste, objective, coherent, concise, and nonrepetitious.  Diversity is welcome.  CONTRIBUTIONS to RISKS@CSL.SRI.COM, with relevant, substantive "Subject:" line.  Others ignored!  REQUESTS to RISKS-Request@CSL.SRI.COM.
FTP VOL i ISSUE j: ftp CRVAX.sri.com, login anonymous (any non-null password), CD RISKS:, GET RISKS-i.j (where i=1 to 11, j is always TWO digits).  Vol i summaries in j=00; "dir risks-*.*" gives directory; "bye" logs out.
ALL CONTRIBUTIONS CONSIDERED AS PERSONAL COMMENTS; USUAL DISCLAIMERS APPLY.  Relevant contributions may appear in the RISKS section of regular issues of ACM SIGSOFT's SOFTWARE ENGINEERING NOTES, unless you state otherwise.

----------------------------------------------------------------------

Date: Tue, 05 Mar 91 16:07:01 -0800
From: "David A. Honig"
Subject: Medical image compression & fertile ground for future litigation

I just attended a conference (SPIE, San Jose) that included a workshop on image compression.  One particularly RISKy area that received too little attention was the risk of sending medical images using lossy compression schemes.

There was much discussion about how compression schemes could exploit the (finite) resolution of the human visual system to send only the information that is perceivable, but no one brought up the fact that what is deemed "detectable" depends upon the tasks with which it is measured.  Furthermore, getting closer to or farther from an image changes the spatial frequencies present!  And of course what radiologists, cardiologists, etc. are actually doing when they look at medical images isn't understood, nor are all situations likely to be tested, even if a battery of real-life images and experts is used to evaluate compression schemes.

One person *did* bring up the fact that lossy compression might simply be avoided for medical (or other life-critical) images, but there was a lot of the "if you've got a hammer, everything looks like a nail" attitude there, and people thought they had a pretty spiffy hammer.  A comment was made about how the "corporate types" are highly resistant to lossy compression of medical images, in a disgusted, "them damn management anchors" kind of way...

I mentioned to someone that if you had to get your images to your specialist half a continent away that quickly, the expense of several additional phone lines and modems might be considerably cheaper than the cost of several lawyers... but I thought this fairly obvious...
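A toy illustration of the task-dependence point, as a hypothetical Python sketch (not any real imaging codec, and all numbers are invented): uniform scalar quantization, standing in for the lossy stage of a real compressor, can leave the worst-case numeric error looking negligible while completely erasing a low-contrast detail, which is exactly the kind of loss whose importance depends on what the clinician is looking for.

  # Purely illustrative: quantization as a stand-in for lossy compression.
  # The grey levels, step size, and "detail" contrasts below are invented.
  STEP = 8                                  # quantization step size

  def lossy_roundtrip(pixel):
      """Quantize and reconstruct one pixel value (irreversible)."""
      return round(pixel / STEP) * STEP

  background = 96                           # flat background grey level
  scanline = [background] * 32
  scanline[10] = background + 3             # faint detail: 3 grey levels
  scanline[20] = background + 40            # bold detail: 40 grey levels

  recon = [lossy_roundtrip(p) for p in scanline]

  max_err = max(abs(a - b) for a, b in zip(scanline, recon))
  print("max per-pixel error:", max_err)                    # -> 3
  print("faint detail before/after:",
        scanline[10] - background, recon[10] - background)  # -> 3 0
  print("bold detail  before/after:",
        scanline[20] - background, recon[20] - background)  # -> 40 40

A worst-case error of 3 grey levels sounds "perceptually lossless", yet the faint feature is gone entirely while the bold one is untouched; no average-error or visual-detectability figure tells you whether that particular loss mattered for the task at hand.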
------------------------------

Date: Tue, 5 Mar 91 19:23:34 EST
From: jake@mars.bony.com
Subject: Flipped and misplaced bits

The story of the CYBER clock going crazy after clock-register overflow reminded me of when I was a new CDC systems programmer a very long time ago and found that many earlier CYBERs didn't have memory parity checks, and memory (and systems) failed not infrequently.  My first horrified thought was: "You mean all those pipes in nuclear reactors and all those airplane wings were designed on these things, and the bits just flip around by themselves sometimes?!"  The answer, apparently, was "Yup."

Later I found that FORTRAN code ran faster when compiled so as not to perform array-bounds checking.  This RISKy behavior was considered a competitive feature.  We even used to joke that we made more mistakes per second than any other CPU on the market.

Preventive maintenance was supposed to avoid memory mishaps, and software reasonableness checks and repeated simulations brought such errors to light before it was too late.  I hope.

Jake     jake@bony1.bony.com

------------------------------

Date: Wed, 6 Mar 91 09:49:29 -0700
From: mcclella@news.Colorado.EDU (MC CLELLAND GARY)
Subject: telco voice mail snafu
Reply-To: mcclella@horton.Colorado.EDU

Excerpt from [Boulder] Daily Camera, March 6, 1991:

A glitch in a computerized telephone answering service provided by US West Communications Inc. of Denver has been plaguing Boulder users for at least a week....  The optional service, known as Voice Messaging, has been available in Boulder since the fall of 1989 and operates from phone company headquarters.  It enables customers to do without in-home electronic answering machines.  [name of local user] said, "It's a nightmare, really.  You get it because it's infallible and then this happens."  [telco spokesperson] said a faulty device called a translator [??] had been replaced and a team of technicians is watching the system around the clock.  Some users said the system had been failing to announce the presence of messages [with a "stutter" dial tone] and announcing messages when there really weren't any....  Billed as a foolproof answer to breakdown-prone tape recorder-based devices, the computerized Voice Messaging service has more than 1000 subscribers....

[Nothing particularly remarkable about this story.  What makes it amusing locally, and very embarrassing for the telco, is that they are in the midst of an aggressive advertising campaign touting the reliability of the electronic system over tape-recorder systems.  I should add that I was an early subscriber and that I remain generally happy with the service.  At least *I* didn't have to take the *%$^ answering machine someplace to be fixed.  The system also has decent security features, such as allowing the use of long PINs that can easily be changed by the user.  But what are we to do about the poor folks who get a computerized service because it's "infallible"?]

Gary McClelland, U. of Colorado     mcclella@horton.colorado.edu

------------------------------

Date: Tue, 5 Mar 91 22:47:53 EST
From: states@ncbi.nlm.nih.gov (David States)
Subject: Are flaws needed for a standard to succeed?

The discussion on "doing it right" vs. "doing it well enough" prompts me to observe that many, if not most, of the universally accepted standards in computing are clearly and seriously flawed.
A few examples:

  RS-232 - cables that cost a small fortune, come in multiple incompatible flavors, and require a degree in EE to understand
  8088 - We who poke fun now would have been millionaires if we had had a better design back when it counted.
  MS-DOS - 640K?  Who would ever want more than 640K of RAM on a PC?
  Unix - I don't know of a more obtuse collection of abbreviations.
  C - One can write intelligible C; it just wouldn't be as much fun.
  Fortran - The F word.  No *REAL* computer scientist would ever use it, but what a standard.
  QWERTY keyboards - What can I say?

Each of these is a widely accepted standard.  Each has provoked uncountably many obscenities, diatribes by the ream, and flame wars galore.  Is it random that they are all so far from being the "right" solution?

If a standard were really clean, elegant, and free of inconsistencies, it would be too easy.  No one would ever have to really dig in and learn the guts of it, and as a result, no one would develop an emotional commitment to using it.  A successful standard must avoid building in insurmountable obstacles, but maybe a few surmountable ones are needed to be sure the user is paying attention.

David States

------------------------------

Date: Tue, 5 Mar 91 10:02:04 EST
From: gls@odyssey.att.com (George L Sicherman)
Subject: carbon versus silicon, Minsky, etc.

In Risks Digest 11.19, Klaus Brunnstein warns about Marvin Minsky's absolute identification of human intelligence with computer intelligence.  I doubt that Marvin Minsky's disinterest in being human is worth worrying about.  People who think for a living have always been especially prone to confuse thinking with living.  So long as most of us know better, I am content to let thinkers inspire themselves with this error.  We need more "mad scientists," not fewer.

I am not impressed by Joseph Weizenbaum's abhorrence of human electrification.  Our nervous systems are already plugged in!  Every evening millions of people's optic and aural nerves are connected not to reality but to the artistic distortion of reality that appears on television.  No physical connection between people and electronics is necessary to create the current disorientation and dehumanization that Weizenbaum dreads as an eventuality.

G. L. Sicherman

------------------------------

Date: Tue, 5 Mar 91 13:42:59 PST
From: r@fermat.UUCP (Richard Schroeppel)
Subject: A correction (Minsky, etc.)

In Risks 11.19, Klaus Brunnstein remarks

> (I now understand why MIT students were forbidden philosophy in Marvin's
> time as dean, as Joe Weizenbaum remembers).

I was at MIT from 1964-73, and from my limited perspective as a student, the MIT Philosophy department appeared to be alive and well.  I spent several of these years at the AI Lab, and don't recall any discouragement of Philosophy or philosophy.  We all spent time speculating about how the mind works.  I also don't recall Minsky ever having any position that could be called "dean".  Maybe some other time period?

Rich Schroeppel     rcs@la.tis.com

------------------------------

Date: Fri, 1 Mar 91 18:41:31 PST
From: dinolt%wdl45@wdl1.wdl.loral.com (George W Dinolt)
Subject: Monopoly Security Policies for Thumb Prints (Baldwin, RISKS-11.16)

Bob Baldwin's comments on ``monopoly security policies'' only scratch the surface of security policy issues and the control of access to data.  Even within the Federal Government (sometimes even within a single branch of government) there are many policies for handling the same classified information.
The real issue is that the ``owners'' of information should be able to control its dissemination.  Who owns my ``thumb print''?  I would like to believe that I do, and hence I should be able to tell the state what it can and can't do with it.  The problem is that once someone has a copy of my ``thumb print,'' I have no physical control over what they will do with it.  A purpose of mandatory access controls in computer systems is to limit that ``copy'' capability.

There is a similar problem with lots of other data about me that unfortunately exists in many computer systems around the country.  Currently implemented mandatory and discretionary access controls do not provide the kinds of dissemination controls many of us think are needed to protect that data.  (Currently implemented policies probably aren't adequate for the Federal Government either, but that is a story for another time.)  Until (and unless) we articulate the policies about dissemination of personal information, it will be difficult to build systems which will meet those objectives.

George Dinolt (dinolt@wdl1.wdl.loral.com)

------------------------------

Date: Fri, 1 Mar 1991 12:49:30 -0500
From: gd@geovision.UUCP (Gord Deinstadt)
Subject: Re: But the computer person said it was OK! (RISKS-11.18)

Dick Wexelblat (rlw@ida.org) writes:

>Abbreviating a longer interchange:
>Me: I only got one prescription, tear one [receipt] up.
>Clerk: I can't
>Me: Let me talk to pharmacist [...]
>Manager: I'm too busy to worry about that now.

Report it to your local College of Pharmacists.  They will not be amused, since following the correct bookkeeping procedures is 90% of a pharmacist's job.

Many Risks postings reflect situations where a technological breakdown is not addressed because the appropriate agency is not informed.  It seems there is a gap between the average person's exposure to Risks and their knowledge of to whom they should complain.  What the public needs is for those serving the public to take responsibility; bodies exist (and have existed for decades) to enforce the taking of responsibility; but rarely do the two come together.  (And yes, sadly, sometimes the regulatory bodies are merely there to whitewash, but I believe most of them are sincere.)

Gord Deinstadt     gdeinstadt@geovision.UUCP

------------------------------

Date: Tue, 5 Mar 91 23:51:17 EST
From: nick@kralizec.fido.oz.au (Nick Andrew)
Subject: Re: But the computer person said it was OK!

In comp.risks you write:

>I tried pointing out that that was unreasonable; he shrugged
>and walked away.
>Having a bit of time, I decided to tell the manager.  It took a bit of
>explaining to get my point across; one of the two individuals I was speaking
>to didn't understand anything I was saying.  After all, the computer had the
>price as the sticker; what was the problem?

G'day, BTW before you hit 'r', suggest you send your reply to nick@socs.uts.edu.au instead, as Kralizec is temporarily uncontactable from the Internet.

Anyway, what I wanted to say in response to your article in Risks is that it is a fairly common thing, so it seems, these days.  I can't tell whether it happened much in the past, but it is certainly prevalent today.  The popular term for somebody to whom you are trying to make such a point is "droid".  I like the word; it emphasises the kind of robotic behaviour that you experienced.  I made up the following definition.  Droid, n: A person (esp.
an employee of a business), exhibiting most of the following characteristics:

  * Naive trust or acceptance of something which is patently wrong; blind faith
  * An unwillingness to think "outside the 9 dots"
  * An "I don't make the rules, but I follow them" attitude
  * Unwilling to think "behind" the rules, or go "beyond" them
  * No desire to fix that which is broken.

So I guess what you were dealing with there was a bunch of droids running a hardware store.  Better find the humans before you start to complain.  (Have you read the Dune books?  Early in the first book Herbert makes the distinction between 'people' and 'humans'.  A 'human' is not an animal.  To wit, (this is Herbert's example) an animal, caught in a trap, would chew its own leg off to escape.  A human in the same situation would lie there, feigning death, awaiting a chance to escape.)

I think I'm qualified to recognise a droid, my previous girlfriend was one.  Had no faculty for critical analysis whatsoever.  She worked for Pizza Hut, and read some magazine article which claimed that Pizza Hut was going to franchise stamps selling from the Post Office (delivery drivers would sell the stamps and other postal things).  Instantly believed something which sounded like a crock of shit, with no proof or credibility to back it up, and which appeared in a magazine around early April.  Amazing stories, but that's enough reminiscing.

I am entertaining the idea presently that "Society" or western civilization is geared towards droids.  So many people at work behave like little cogs in the machine that there must be something appealing in that structure.  I guess most of them don't behave the same way (or not to the same extent) outside work, but a certain amount of the droid mentality must always be there.  Many of today's laws, and new ones are easy to point out, are there to keep droids under control.  Some are to keep prude-droids happy, such as laws against nude bathing.

But I find it hard to reconcile the "droid society" theory with the "cash society" theory, which asserts that the big cogs of the world turn in the pursuit of money, money and nothing else.  In short, the pursuit of money explains the entire Gulf War, and nobody moves a muscle without the possibility of making money.  Maybe the "cash society" theory on a small scale can be used to derive the "droid society", as it is mostly in the pursuit of money (i.e. at work) that droids appear.

A colleague recently had a similar experience to yours.  He tried to pay his phone bill (always a bad move) by phone.  Telecom (i.e. the phone company) have two numbers: one for account enquiries and the other for "pay by phone".  Clive spent an hour dialling the "pbp" number, which was always engaged.  Then he rang the "enq" number and his call was placed in a queue (10 min).  Anyway, after 5 minutes of conversing with the droid, Clive learned the setup:

  * The people on enquiries can do enquiries but can't pay bills
  * The people on pay bills can pay bills but can't do enquiries
  * Enquiries has a queue handler answering service
  * Pay by Phone doesn't

Of course it makes perfect sense to put an answering service on the PbP number as well, so that people can pay their bill eventually, but the droid didn't understand that, didn't know why there was no service, and had no inclination (or authority) to do anything about it.
At that point Clive should have tried to go up through the supervisor ranks and got some answers, but he's a busy person, and arguing with droids can really waste a human's day if they don't watch themselves.

Anyway, take comfort in the thought that you aren't alone & try to stay sane anyway...     Nick.

------------------------------

Date: Sat, 02 Mar 91 10:56:20 EST
From: Joe Morris
Subject: Automatic patching revisited

Several recent postings have discussed the security risks associated with communications applications which can be patched from a remote host without the knowledge of the local user.  My reading of these postings is that their entire focus has been on the risks of sabotage; my concern is that another issue has been neglected: configuration control.

If an application is changed without the knowledge of the user, then changes in its behavior may occur... which, of course, is probably the purpose of the (legitimate) download.  If the user's use of the system is affected by this, and the user had no reason to expect the system to change, you've got a situation in which that user will probably spend significant worry-time trying to figure out what s/he did wrong *this* time.  Lousy PR for the program.

Similarly, users frequently adopt procedures which rely on not-in-specification characteristics of a system.  I don't have much sympathy for users who get burned by this *if they were made aware that a change was to be made* (even if they weren't told that the change would affect whatever they were relying on), but it isn't fair or ethical to abruptly change something which the user has reason to treat as local and personal (such as a program in that user's own computer).  Two rather generic examples:

  * An operating system invites a user logon with a line containing the words 'PLEASE ENTER LOGON:' and a user writes a script looking for this text.  The next release of the system (announced far in advance) changes this to 'Please enter logon:' and the script breaks.  Tough.

  * The user's communications program is being used to talk to two systems like the one above.  The one which is changing to mixed-case messages downloads a change to the script to make the text-match string mixed case.  The comm program now fails to work with the host which hasn't upgraded its system to use mixed case.  Unprofessional and unethical if the host-initiated modification was done without the explicit knowledge and informed consent of the user.

I manage a complex with several flavors of systems (IBM, VAX, Sun).  I've got to have records of when changes were made to each, so that if a system's behavior changes I can reliably determine whether a change was made, which of my staff made the change, and why.  I don't think it is unreasonable to expect an application to keep at least a bare-bones audit trail on the user's system of when changes were made, what the nature of the changes was, and how the user authorized the installation of those changes at the time the change was downloaded.

Joe Morris
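Such a bare-bones audit trail is easy to sketch.  The following hypothetical Python fragment (the field names, file location, and authorization wording are illustrative choices of this sketch, not any vendor's format) appends one record per downloaded change, noting when it arrived, what it claimed to be, and how the user authorized it:

  # Hypothetical sketch of a minimal patch-audit trail kept on the user's
  # system.  All names and fields here are illustrative assumptions.
  import json, time

  AUDIT_LOG = "patch-audit.log"            # append-only log file

  def record_patch(component, old_version, new_version, description,
                   authorized_by):
      """Append one record describing a host-initiated change."""
      entry = {
          "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
          "component": component,          # what was changed
          "old_version": old_version,
          "new_version": new_version,
          "description": description,      # nature of the change
          "authorized_by": authorized_by,  # how the user consented
      }
      with open(AUDIT_LOG, "a") as log:
          log.write(json.dumps(entry) + "\n")

  # Example: a script update is installed only after the user agrees.
  record_patch("login-script", "1.3", "1.4",
               "text-match string changed to mixed case",
               "user accepted prompt at download time")

Even something this small lets a user (or the site's support staff) answer "did anything change, and did I agree to it?" after the fact.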
------------------------------

Date: Sat, 02 Mar 91 11:11:09 EST
From: Joe Morris
Subject: Trojan horses and password collectors...some retribution

There have been several discussions in RISKS-FORUM over the past years concerning trojan horse programs for host-connected displays, in which the trojan simulates the host OS and invites an unwary user to enter a valid userid and password.

The following story, posted on the usenet group alt.folklore.computers, brings up the Gilbert & Sullivan line from _Mikado_, "To make the punishment fit the crime".

Joe Morris

==========

From: mike@vlsivie.tuwien.ac.at (Michael K. Gschwind)
Newsgroups: alt.folklore.computers
Subject: Re: Fake operating systems...
Date: 2 Mar 91 14:23:07 GMT
Organization: Vienna University of Technology

In article <95125755@bfmny0.BFM.COM> tneff@bfmny0.BFM.COM (Tom Neff) writes:
>Yeah, and the really big laugh is that while Joe User walks away
>scratching his head at why his usual password doesn't work, the BASIC
>program author is busily accumulating a fat file of real passwords!
>
>Fake shells and OS's are a cute novelty, but they do have their darker
>side.

Yes, they certainly have.  A student at the Vienna University of Technology wrote one of those fake login programs which accumulated passwords.  Unfortunately for him, the supervisors soon found out.  HOWEVER, they did nothing to stop him immediately.  They just copied his program.  The next test had some questions about computer security.  One question was:

  Explain the following program.  Point out how it achieves its aim and what potential bugs this program has:

According to the people who witnessed this, it was quite amusing to watch him read the question, see him panic, and leave ...

------------------------------

Date: 2 March 91 20:36:25 CST
To: Mark Huth
From: RISKS Forum
Subject: Re: Red and green clocks (RISKS-11.20)

   [This header is EXACTLY as it was received.  I need a Huth Who to figure it out.  The message is certainly not "From: RISKS Forum"!  It looks as if his mail sender crossed "Mark Huth" with "RISKS Forum".  What do you get when you cross a sender with a receiver?  Huth paste.  What do you get when you cross a sender?  Gibberish.  Why did the sender cross the addresses?  Because it confused the red and green clocks.  PGN]

Re: interference to clocks caused by signals superimposed on AC power lines, Paul Leyland asks

>[ Can anyone explain why "red" clocks should be more susceptible to this form
>of interference than "green" clocks? ]

The older clocks with red LED displays just counted the cycles in the AC power (60 cycles per second in the US), whereas the newer clocks with green (electroluminescent) displays count the cycles generated by a quartz crystal oscillator, isolated from the power line by a DC power supply.

------------------------------

Date: Sun, 3 Mar 1991 02:58:20 GMT
From: henry@zoo.toronto.edu (Henry Spencer)
Subject: Re: Red clocks run faster than green ones!

I think the key words are "more sophisticated".  That is, I would conjecture a correlation between choice of display and sophistication of the rest of the circuitry.

A clock that keeps time based on the power frequency works by counting zero crossings of the AC waveform, and it matters how carefully this is done.  The simplest method is to just count each time the voltage crosses zero.  There is an inherent problem here: noise can turn one zero crossing into several, and if the circuit responds quickly enough, it will count them all.  The potential for trouble increases greatly if some sort of signalling is also being done over the power lines, because one very popular way of doing such signalling is to send a short burst of data around the time of the zero crossing, when voltage is low and transmission is easy.  This almost guarantees multiplication of zero crossings!
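The counting problem is easy to demonstrate numerically.  Here is a hypothetical simulation in Python (the sample rate, noise amplitudes, and thresholds are invented, not measurements of any real clock): a clean 60 Hz waveform picks up a small noise burst near each zero crossing; a naive counter counts every sign change it sees and runs fast, while a counter that rejects overly-frequent crossings (the first of the safeguards described next) stays essentially on time.

  # Hypothetical "red clock" simulation: count zero crossings of a 60 Hz
  # mains waveform that carries noise bursts near the crossings.
  # All parameter values are illustrative.
  import math, random

  random.seed(0)
  MAINS_HZ  = 60
  SAMPLE_HZ = 60 * 200               # 200 samples per mains cycle
  SECONDS   = 60                     # simulate one minute

  def mains(t):
      """Mains voltage plus a noise burst injected near each zero crossing."""
      v = math.sin(2 * math.pi * MAINS_HZ * t)
      if abs(v) < 0.05:                       # near a zero crossing...
          v += random.uniform(-0.1, 0.1)      # ...signalling/noise burst
      return v

  naive = 0                  # counts every sign change it sees
  guarded = 0                # ignores crossings that come "too soon"
  last_t = -1.0
  MIN_GAP = 0.8 * (0.5 / MAINS_HZ)   # true crossings are ~8.3 ms apart

  prev = mains(0.0)
  for i in range(1, SECONDS * SAMPLE_HZ):
      t = i / SAMPLE_HZ
      v = mains(t)
      if (v >= 0) != (prev >= 0):             # sign change = zero crossing
          naive += 1
          if t - last_t > MIN_GAP:            # reject overly-frequent ones
              guarded += 1
              last_t = t
      prev = v

  expected = 2 * MAINS_HZ * SECONDS           # two crossings per cycle
  print("expected crossings:", expected)
  print("naive counter:     ", naive, "(clock runs fast)")
  print("guarded counter:   ", guarded)

With the noise bursts in place, the naive counter picks up many spurious crossings and gains time steadily, while the guarded counter stays close to the expected count; the peak-charge/discharge circuit described below achieves the same end in hardware.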
A more sophisticated circuit will incorporate safeguards against noise that will also be effective against zero-crossing signalling.  One way is to know the approximate frequency of the AC waveform and reject overly-frequent zero crossings as spurious.  Another is to charge a storage element from the *peak* of the AC waveform, discharge it at a zero crossing, and count discharges, so that only the first zero crossing after a peak is counted.  (I've built both types of circuit.)

LED displays are cheap and easy to drive from digital circuits, so it would not be too surprising to find them in the cheapest clocks.  The green displays are probably vacuum-fluorescent types, which are perceived (or at least advertised!) as better for some reason, but need more complicated and costly drive circuitry.  So plausibly the green clocks are less cost-critical and get better counting circuitry as well.

Henry Spencer at U of Toronto Zoology     henry@zoo.toronto.edu     utzoo!henry

------------------------------

Date: Sun, 3 Mar 91 15:09:55 EST
From: monta@image.mit.edu (Peter Monta)
Subject: Re: Red clocks run faster than green ones!

> New street lighting systems in an East Midlands village have been playing
> curious games with alarm clocks, causing them to race up to four hours ahead
> while their owners slept.

The electrical grid provides a kind of time/frequency service in addition to power---the frequency is accurately held to 60 Hz (or 50 Hz) over the long run, and simple clocks count AC mains cycles.

The problem with electronic clocks is that, if they are poorly designed, they can have much less noise immunity than their mechanical counterparts.  The clock on the classroom wall won't care about a 10 microsecond spike on the power line, but a "red" clock may see it as an extra cycle.  Lots of spikes, and you have a clock running amok.  The new streetlights were probably injecting noise onto the power lines.  (Notice that the clock always runs fast---a noise source can't remove cycles, only add them.)

The solution is either a crystal-controlled ("green"?) clock, which must be periodically trimmed to the correct time, or a noise-immune cycle detector.  (The synchronous motor in the mechanical clock is such.)  I have a 60 Hz source phase-locked to the mains frequency, which is one way to do this electronically, and it's quite interesting to watch the phase with respect to a stable, accurate frequency source.  The grid can gain or lose up to a few seconds per day, only to slew slowly back to the correct time during the night to keep the clocks happy.

Peter Monta, MIT Advanced Television Research Program     monta@image.mit.edu

------------------------------

Date: Sun, 3 Mar 1991 12:32:22 PST
From: dplatt@goblin.ntg (Dave Platt)
Subject: Re: Red and green clocks

> [ Can anyone explain why "red" clocks should be more susceptible to this form
> of interference than "green" clocks? ]

I'm going to guess that the street-light timers in question might be using some form of RF carrier for coordination and control.  The timer might be using a technology similar to the X-10 over-the-power-wires system sold here in the U.S., in which short bursts of low-frequency RF are transmitted over the power lines.  Or, it might simply be a matter of some electrical interference (noise) being emitted by the timers.  The X-10 signals are transmitted only during the zero-crossing portion of the 60 Hz cycle.  Noise can occur at any point in the waveform, but certain forms are more likely at the zero-crossing point.

Now...
most inexpensive AC-powered digital clocks keep track of the time by monitoring zero-crossings in the AC cycle... the long-term accuracy of the AC is better than any inexpensive quartz-crystal oscillator, and it's easy to implement the zero-crossing detector using a Schmitt-trigger circuit.

My guess is that the signals (or other interference) transmitted by the street-light timers were of sufficiently high power to spoof the zero-crossing detectors in the inexpensive clocks... this would occur if the amplitude of the interfering signal exceeded the hysteresis band of the Schmitt trigger.  This would cause the clocks to advance more rapidly than they should.

As to why the "green" (plasma-display) clocks are less sensitive to the interference... I believe that the plasma-display technology is inherently more expensive than LEDs, and thus is used only in more up-scale clocks.  These more-expensive clocks would have a better chance of having been designed with a more reliable zero-crossing detector... e.g., one which runs the low-voltage AC signal through a low-pass filter designed to prevent interference from spoofing the zero-crossing detector.

Dave Platt, New Technologies Group Inc.
2468 Embarcardero Way, Palo Alto CA 94303     (415) 813-8917
UUCP: ...apple!ntg!dplatt

------------------------------

Date: Mon, 4 Mar 91 10:00:59 EST
From: Glen Ditchfield
Subject: Re: Red clocks run faster than green ones!

You probably wouldn't call it "more sophisticated", but I once owned an alarm clock with a green digital display that was basically mechanical.  A light shone through a mask and a sheet of green plastic, and an electric motor drove clockwork that slid shutters across segments of the display.  Transitions between some digits required a lot of sliding back and forth.

------------------------------

Date: 4 Mar 91 21:03:39 GMT
From: king@motcid.UUCP (Steven King)
Subject: Re: Red clocks run faster than green ones!

I don't know if this is the cause of the Castle Donnington problem, but I've seen a similar effect here in the States.

My parents were hosting an exchange student from the Netherlands one year.  Naturally, the young lady brought her trusty alarm clock with her.  She plugged it in one night, set the alarm, and went to sleep.  My mom woke to hear MaryLou in the shower around 4am, getting ready for her eight o'clock classes!  Funny, MaryLou's clock read 6am...  Turns out that the clock was pulling its timing information from the line current and, at 60 Hz instead of 50, was running 20% fast.

Is it possible that the same thing happened in Castle Donnington when the new street light timers were installed?  Could the timers have been feeding a frequency back into the power lines, causing the clocks to run fast?  Assuming that's the case, I'd have to guess that "red" clocks commonly key off of line frequency for timing, whereas "green" clocks maintain their own independent timers.

Steven King, Motorola Cellular     (...uunet!motcid!king)

------------------------------

End of RISKS-FORUM Digest 11.21
************************