I've been wanting to write this for a few months now (the original intent was to write it for the other site [1], but I'm not sure if Smirk would agree with me), but since Jason Kottke [2] linked to Craig “Craigslist.org” Newmark's article [3] on the subject, I figured it was time.
Net neutrality.
Should the network be neutral in the traffic it transports, or can it prioritize the data depending upon content, destination, or both? It's an argument with proponents on both sides, and it pretty much breaks down into individuals and small companies wanting neutrality, while the large multinational corporations want to shape the traffic.
Me?
I personally would like to see a neutral Internet, but I also see no problem with companies doing what they want to network traffic they carry. What I don't want to see is government regulation of what can and can't be done on the Internet. And generally, I feel the whole movement is moot anyway, because the net will always be neutral.
How?
Economics, my dear Watson.
Data is transferred across the Internet using the Internet Protocol, IP. To transfer a file, it's broken up into small bundles of data called packets, each wrapped with a source address (where it's coming from) and a destination address (where it's going to), and pumped out through some piece of hardware (an Ethernet card, USB (Universal Serial Bus), a serial line, a T-1, an OC-3; heck, IP packets can even be transported using avian carriers [8]) to another device that's closer to the destination [9]. That device then takes the packet, looks at the destination address, and pumps the packet out some hardware port to another device that's closer still, and the process repeats until the destination is reached, on average in less than 20 such hops [1] [10].
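To make that concrete, here's a toy forwarding decision in Python. The routing table, networks, and port names are all invented for illustration, and real routers do this lookup (longest-prefix matching) in dedicated hardware, not a scripting language:

import ipaddress

# Hypothetical routing table: (network, outbound port); real tables
# hold hundreds of thousands of entries.
ROUTES = [
    (ipaddress.ip_network("10.0.0.0/8"),     "port-1"),
    (ipaddress.ip_network("192.168.0.0/16"), "port-2"),
    (ipaddress.ip_network("0.0.0.0/0"),      "port-0"),   # default route
]

def next_hop(destination: str) -> str:
    # Pick the most specific route containing the destination
    # (longest-prefix match).
    addr = ipaddress.ip_address(destination)
    matches = [(net, port) for net, port in ROUTES if addr in net]
    net, port = max(matches, key=lambda m: m[0].prefixlen)
    return port

print(next_hop("10.1.2.3"))     # port-1
print(next_hop("203.0.113.7"))  # port-0 (default route)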
Now, the closer to the “core” you get, the more packets a router (a dedicated device to sling packets around) has to handle. To this end, most routers have hardware dedicated to slinging the packets around, and since the destination address is at a fixed point in the IP packet, it's easy (relatively speaking) to make dedicated hardware for this function. And it's important that this be fast, because a core router may be handling 50,000 packets a second (the router The Company's traffic flows through handles about 4,000 packets a second) and comparing each destination address against a routing table with perhaps 200,000 possible destinations. Assuming a binary search on the routing table, that's 17 comparisons on average per packet (18 max), times 50,000 packets per second, or 850,000 comparisons per second, or about one comparison per µsecond [2] [11].
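A quick sanity check of that arithmetic, using the figures quoted above (they're illustrative, not measurements):

import math

routes = 200_000   # routing table entries (figure from the text)
pps    = 50_000    # packets per second at a busy core router

max_cmp = math.ceil(math.log2(routes))  # 18 comparisons worst case
avg_cmp = max_cmp - 1                   # roughly 17 on average

per_sec = avg_cmp * pps                 # 850,000 comparisons/second
print(max_cmp, avg_cmp, per_sec)
print(1_000_000 / per_sec)              # ~1.18 µs per comparison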
But the second other criteria are added to the routing entries, fewer packets can be processed on a given router. Say you want certain source addresses to be routed over faster links. Now, not only do you have to scan the destination address, but the source address as well. But the dedicated hardware in routers is only there for scanning destination addresses; scanning other parts of the packet requires the CPU (Central Processing Unit) in the router to handle the packet.
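As a sketch of what that extra scan amounts to, here's a hypothetical “premium source” policy bolted onto the toy lookup; the partner network and link names are made up:

import ipaddress

# Hypothetical policy: traffic from a partner network gets the fast link.
PREMIUM_SOURCES = [ipaddress.ip_network("198.51.100.0/24")]

def pick_link(source: str) -> str:
    src = ipaddress.ip_address(source)
    # A second scan of the header, this time over the source address,
    # which the destination-only hardware path can't do for us.
    if any(src in net for net in PREMIUM_SOURCES):
        return "fast-link"
    return "normal-link"

print(pick_link("198.51.100.9"))  # fast-link
print(pick_link("203.0.113.7"))   # normal-link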
Ouch.
And scanning deeper into the IP packet, say, for a TCP (Transmission Control Protocol) port (like TCP port 80, which is the default port HTTP (Hypertext Transfer Protocol) runs over), requires some additional processing to locate the TCP header, since the IP header is variable in length.
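Here's roughly what that extra step looks like: the IHL field (the low nibble of the packet's first byte) gives the IP header length, and only once that's computed can the TCP ports be located. The packet below is hand-assembled for illustration:

import struct

def tcp_dest_port(packet: bytes) -> int:
    ihl = (packet[0] & 0x0F) * 4            # IP header length in bytes
    # The TCP destination port sits at bytes 2-3 of the TCP header,
    # which starts wherever the variable-length IP header ends.
    (port,) = struct.unpack_from("!H", packet, ihl + 2)
    return port

# Hand-built IPv4 packet: version 4, IHL 5 (a 20-byte header),
# followed by a TCP header with source port 12345, destination port 80.
ip_header  = bytes([0x45]) + bytes(19)
tcp_header = struct.pack("!HH", 12345, 80) + bytes(16)
print(tcp_dest_port(ip_header + tcp_header))  # 80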
Ouch.
And then there's the administrative overhead.
Oh yes, maintaining a list of “fast addresses” (say, to partnering networks) isn't free. There's overhead in maintaining the list. There's overhead in distributing the list to all the routers across a network. There's overhead in troubleshooting: “Why did the packets go there?”—a few hours later—“So, which is more important, streaming video from Viacom [12] to AT&T [13] or VoIP (Voice over Internet Protocol) from AT&T to Verizon [14]?”—a few days later—“But if we do that, that impacts the data from Sprint [15], who just signed a major deal with us.”
Ouch.
And my example there assumes that AT&T hasn't already bought Verizon. Heck, in these days of multinational mergers it's hard to know who owns what [16]; just the politics of running a non-neutral network is mind-boggling to me, never mind the technical aspects of it.
A simple network is cheaper to run. A simple network is faster too.
Cheaper to run. Faster. Higher profit margin. I would think this would be a no-brainer.
Oh, and did I mention that the current version of IP, which was designed over thirty years ago, has a field for “Quality of Service” that is rarely, if ever, actually used? Why?
It's been found over and over again that it's cheaper and easier to make faster routers than to implement quality of service across the network.
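(For reference, that field is the second byte of the IPv4 header, originally “Type of Service” and since redefined as six DSCP bits plus two ECN bits. Reading it is trivial; getting every network to agree on what it means is the hard part.)

def dscp(packet: bytes) -> int:
    # Byte 1 of the IPv4 header: six DSCP bits, then two ECN bits.
    return packet[1] >> 2

ip_header = bytes([0x45, 0x00]) + bytes(18)  # ToS/DSCP of 0, the usual case
print(dscp(ip_header))  # 0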
So I'm of the opinion—let the big boys waste their money on smart networks. Let them learn the hard way that a stupid network [17] is, in the long term, better.
And that's why I'm not really concerned about network neutrality. The market will sort it out.
Added a link to Wikipedia [21], and a small addendum to this entry [22].
[1] http://www.saltminechronicles.com/
[2] http://www.kottke.org/remainder/07/03/13115.html
[3] http://www.cnn.com/2006/US/06/09/newmark.internet/index.html
[4] http://en.wikipedia.org/wiki/Net_neutrality
[6] http://www.savetheinternet.com/
[7] http://www.wearetheweb.org/videos.html
[8] http://www.blug.linux.no/rfc1149/
[9] http://en.wikipedia.org/wiki/Small_world_experiment#Basic_Procedure
[16] http://www.eccentricflower.com/cgi-bin/lucien?tuned/1997&seq=1026
[17] http://www.isen.com/stupid.html
[18] http://cf23.clusty.com/search?query=average%20number%20of%20hops%20on%20the%20internet&