Few Web services require as much computation per request as search engines. On average, a single query on Google reads hundreds of megabytes of data and consumes tens of billions of CPU (Central Processing Unit) cycles. Supporting a peak request stream of thousands of queries per second requires an infrastructure comparable in size to that of the largest supercomputer installations. Combining more than 15,000 commodity-class PCs (personal computers) with fault-tolerant software creates a solution that is more cost-effective than a comparable system built out of a smaller number of high-end servers.
Via the Google Weblog [1], “Web Search for a Planet: The Google Cluster Architecture [2]”
This is a good introduction to the Google Cluster, the more than 15,000 machines (as of the paper's writing, I'm sure) that make up the Google website and give it its incredible performance.
One of the ways they do this is by having a series of clusters (of a few thousand machines each) located around the world to handle queries more or less locally; I did a DNS (Domain Name System) query from the Facility in the Middle of Nowhere for www.google.com and got 216.239.51.99, while a DNS query from a machine in Boston gave 216.239.37.99. Some other interesting aspects: they forgo hardware reliability in favor of software reliability, they don't use the fastest hardware available but the hardware with the best price/performance ratio, and they rely on lots of commodity hardware.
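If you want to try that lookup yourself, here's a minimal sketch using Python's standard library; the hostname is the one from the post, and the addresses you get back will of course depend on where you run it (and on how Google's DNS setup has changed since the paper was written).

```python
import socket

def resolve_all(hostname):
    """Return the sorted set of IP addresses the local resolver gives for a hostname."""
    # getaddrinfo returns (family, type, proto, canonname, sockaddr) tuples;
    # sockaddr[0] is the IP address string.
    results = socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)
    return sorted({sockaddr[0] for _, _, _, _, sockaddr in results})

if __name__ == "__main__":
    # Running this from machines in different parts of the world should return
    # different addresses, since DNS steers each client toward a nearby cluster.
    print(resolve_all("www.google.com"))
```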
The paper doesn't go into deep technical details, but it does give a nice overview of how their system is set up.
[1] http://google.blogspace.com/archives/000925
[2] L. A. Barroso, J. Dean, and U. Hölzle, "Web Search for a Planet: The Google Cluster Architecture," IEEE Micro, vol. 23, no. 2, March–April 2003.