The decommoditization of protocols

One of the most interesting things about Microsoft's Halloween_Memo[1] is the concept of "de-commoditizing" protocols. This short essay attempts to explain what this means, and what its effects on free software are. I argue that decommoditized protocols are a very effective weapon against free software in the short term, but in the long term will help free software become more fulfilling to users.

I use the term "protocol" in a rather inclusive sense, encompassing APIs, file formats, etc., not just the narrow sense of networking protocols - in short, anything that software modules need in order to work with each other.

Background: free software and proprietary software

Decommoditized protocols are a marker for some fundamental differences between free software and proprietary software, and the philosophy that goes into them.

In an ideal world, software would be created to fulfill user needs, and would be designed to be maximally usable. However, real software gets created for somewhat different aims, sometimes in line with these user needs, sometimes less so.

Proprietary software is created to make money for its authors. To a first approximation, the best way to make money is to create a highly usable product, so that users will be willing to pay for it. However, at least two other factors come into play:

* Barriers to competition

* Network effects

For proprietary software to be profitable, it must create a proprietary advantage. Thus, simple software that just gets the job done has one serious disadvantage: a competitor can duplicate it. Given the extremely low marginal cost of software, this inevitably drives the price to near-zero. Thus, all truly profitable software has built-in barriers to competition.

And the most effective way of creating those barriers is to exploit network effects. Software doesn't exist in isolation - it's constantly interworking with other modules, loading and storing files in file formats, calling APIs, and communicating over the network. In most cases the usefulness of the software depends more on how well it interworks with other stuff than on its own intrinsic merits. For example, even if a new word processor comes along that's better than Word, many people would be reluctant to switch because there are so many existing documents in Word format.

Free software, by contrast, is written for lots of different reasons, including a simple desire for the software on the part of the author, education, and being part of the free software community. However, getting the job done expediently is almost always an overriding concern. Thus, free software tends not to be much more complex than necessary, and making use of existing modules and protocols is often more appealing than reinventing things from scratch.

How to decommoditize a protocol

There are six things you can do:

* Make it more complex

* Incompletely specify it

* Fail to document it

* Change it rapidly

* Use encumbered intellectual property

* Add value (i.e. solve more problems, better performance)

Of these, only the last really makes the software more useful to users.

Examples of commodity protocols

The very best commodity protocols solve hard problems, but make it look simple. Two examples stand out: TCP/IP and HTTP/1.0. Neither of these protocols is perfect. However, their commodity nature greatly helped them gain a foothold.

TCP/IP

TCP/IP is the foundation of the Internet. The protocol dates back to the early days of the ARPANet, and has existed in its present form since September 1981 (the date of RFC_791[2] and RFC_793[3]). This protocol violates all of the first five principles of de-commoditization.

* It is simple. Together, the two RFCs span 130 simply formatted pages, appendices and all. This is nothing short of astonishing, considering how difficult a problem internetworking is considered to be.

* It is completely specified. IETF protocols in general are well known for specifying "bits on the wire", and these protocols exemplify IETF practice. There are no complicated options or variants. As a consequence, TCP/IP implementations tend to work together very well. (Actually, you need to add a link layer to get a complete TCP/IP implementation, but RFC_1055[4] describes such a link layer, SLIP, in six pages; see the sketch after this list.)

* It is well documented. The RFCs are a model of clarity, thanks in large part to Jon_Postel[5].

* It is stable and mature. The protocol has been in use since 1981, and has scaled by many orders of magnitude. Old implementations still work on the modern Internet.

* It is unencumbered. No patents, copyrights, or trademarks are infringed by a working TCP/IP implementation.
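
To give a feel for just how small that link layer is, here is a rough Python sketch of SLIP framing. The constant values come straight from RFC 1055; the function names and structure are my own.

```python
# A minimal sketch of SLIP framing (RFC 1055). The whole "link layer"
# amounts to escaping two special bytes and marking the end of the frame.

END, ESC = 0xC0, 0xDB           # frame delimiter and escape byte
ESC_END, ESC_ESC = 0xDC, 0xDD   # escaped stand-ins for END and ESC

def slip_encode(packet: bytes) -> bytes:
    """Wrap one IP packet in a SLIP frame."""
    out = bytearray()
    for b in packet:
        if b == END:
            out += bytes([ESC, ESC_END])
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])
        else:
            out.append(b)
    out.append(END)
    return bytes(out)

def slip_decode(frame: bytes) -> bytes:
    """Recover the packet from a single received frame."""
    out, escaped = bytearray(), False
    for b in frame:
        if escaped:
            out.append(END if b == ESC_END else ESC)
            escaped = False
        elif b == ESC:
            escaped = True
        elif b != END:
            out.append(b)
    return bytes(out)
```

RFC 1055 is explicit that this is all SLIP does: no addressing, no error detection, no compression.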

To say that TCP/IP has been enormously successful would be an understatement.

HTTP/1.0

Another example of a commodity protocol is version 1.0 of HTTP. Like TCP/IP, it solves a hard problem (people had been trying to implement global hypertext systems for at least three decades before the Web hit), but is very simple. Indeed, a working HTTP/1.0[6] server is a weekend hack, and writing a simpleminded client is pretty easy too.
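
As a rough illustration of how little such a simpleminded client needs, here is a sketch of an HTTP/1.0 GET over a raw socket in Python. The host name is only a placeholder and a real client would want error handling, but the protocol itself asks for nothing more than this.

```python
# A minimal HTTP/1.0 GET, assuming a host that still answers plain HTTP
# on port 80 ("example.com" is just a stand-in).
import socket

host = "example.com"
with socket.create_connection((host, 80)) as sock:
    # HTTP/1.0 does not require a Host header, but most servers expect one.
    sock.sendall(f"GET / HTTP/1.0\r\nHost: {host}\r\n\r\n".encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):   # in HTTP/1.0 the server closes when done
        response += chunk

headers, _, body = response.partition(b"\r\n\r\n")
print(headers.decode("iso-8859-1"))
```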

After the success of HTTP/1.0, however, the pressure to make the standard more complex became too great to resist. As a consequence, the HTTP/1.1_spec[7] is about 167 pages long, and is still in the process of revision at the time of this writing. A lot of what's in HTTP/1.1 is good stuff (like pipelining), but a lot is also needless complexity.
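
To see both the added value and the added burden, consider what pipelining looks like from the client's side. The sketch below (again with a placeholder host, and assuming the server actually honours pipelined requests, which not all do) sends two requests before reading anything back. The round trips saved are real, but so is the new obligation: the client must now parse Content-Length or chunked framing just to tell the responses apart.

```python
# A rough sketch of HTTP/1.1 pipelining over a raw socket.
import socket

host = "example.com"
requests = (
    f"GET /a HTTP/1.1\r\nHost: {host}\r\n\r\n"
    f"GET /b HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
).encode("ascii")

with socket.create_connection((host, 80)) as sock:
    sock.sendall(requests)        # both requests go out before any reply
    data = b""
    while chunk := sock.recv(4096):
        data += chunk

# Both responses now sit back to back in one byte stream. Splitting them
# requires honouring Content-Length or chunked transfer coding, which is
# exactly the kind of work an HTTP/1.0 client never had to do.
print(data[:200])
```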

Examples of de-commoditized protocols

The Microsoft Win32 API

Perhaps the most classic example of a decommoditized protocol is the set of APIs, DLLs, and other stuff comprising the Microsoft Win32 environment. This protocol is extraordinarily complex, incompletely specified (indeed, there are numerous inconsistencies between Microsoft's own implementations), poorly documented, and subject to rapid change.

As a consequence, the Wine[8] project (an attempt to implement the Win32 API within Linux) has found it very rough going. But they will get there.

RealAudio

Real_Networks[9] is a classic example of a company that was able to leverage a proprietary protocol into a successful business. Real has been upgrading the protocol continuously, improving quality and compression. They've used patents[10], undocumented protocols, rapid change, and added value to protect their product. It's also a classic example of the network effect - the more market share the clients have, the more motivation there is to provide content in RealAudio form, and vice versa.

How the IETF resists de-commoditization

The IETF_process[11] actively resists de-commoditization in a number of ways. Most importantly, it requires two or more interoperable implementations before a protocol can advance along the standards track. This requirement puts a lot of pressure on the proposed standard to be both simple and completely specified.

The process resists de-commoditization in a number of other important ways as well. It encourages the use of unencumbered technology when an unencumbered alternative exists. The entire process is conducted in the open, with free availability of all documents. And perhaps most importantly, there is a strong tradition of standardizing technically excellent commodity protocols.

UnixTM[12] in the '80s: a case study of how de-commoditization can kill you

Back in the '80s, many technically oriented people were hoping that Unix would catch on in the PC marketplace. It was widely recognized as a powerful and mature system, with many important features such as networking, multitasking, and protected memory (the last of which is still not completely implemented in MacOS and Win9x). Of course, it required a lot more resources to run than the "toy" PCs of the day could provide, but it was also clear that PCs were getting more powerful by the month.

None of this happened in the '80s, though. I argue that de-commoditization was a major culprit. All of the Unix vendors wished for their brand of Unix to have a proprietary edge over the others. Thus, it was to their advantage to add "features" that didn't exist on other Unices. The goal was to lock developers into one vendor's brand. If they were successful, the resulting software just wouldn't work well on other systems.

Unix vendors did OK during this time, but were never able to compete effectively against PC operating systems. It was not until the advent of Linux that Unix really started taking off in the PC world. I believe that the success of Linux is due in large part to its wholehearted embrace of the essential commodity nature of Unix. Indeed, Linux has fairly few features that were not present in Unices of the '80s. The appeal of Linux is that it implements these features extraordinarily well, and with an eye towards compatibility.

Complexity

The design of software is a constant struggle against complexity. On the one hand, the world is complex, and many difficult problems inherently require complex solutions. On the other hand, it's quite easy to add gratuitous complexity. The key difference is how much of the problem the complex software solves (i.e. how much complexity is exported to the other side of the protocol).

One example of this sometimes subtle distinction is the comparison of GX_vs._OpenType_layout[13] by Dave Opstad. Dave points out that for an application to support, say, Tibetan using OpenType, it still has to do a lot of the work itself. In the GX model, that work is done for you in the operating system. Thus, even though OpenType and GX are roughly comparable in complexity, from the point of view of the application GX is "simpler". Overall complexity, in other words, needs to be weighed against how much of the problem is solved.

For a proprietary software organization that has just implemented a complex piece of software, it is tempting to assign a zero cost to the complexity. However, this would be quite wrong. Aside from the simple issue of higher maintenance costs, having more complex protocols makes the software far less agile, i.e. it is much more difficult to adapt it to changing market conditions.

Conclusion

In spite of the strong financial incentives and remarkable market successes of de-commoditized protocols, I believe that the future lies with commodity protocols. Users are becoming frustrated with the complexity, lack of consistency, poor documentation, and lack of choice that de-commoditized systems suffer from. Conversely, systems based entirely on commodity protocols have had their own share of remarkable successes.

The fact that commodity protocols are simpler and more completely specified than proprietary ones gives free software developers an edge. Given the disadvantages that free software developers work under (lack of funding being the most obvious), this edge is critically important if free software is to realize its current promise as a viable alternative to proprietary software as a tool for ordinary people to get their work done.

Thanks to the Gimp developers on #gimp for feedback on earlier drafts.

Link: 1. Halloween_Memo

Link: 2. RFC_791

Link: 3. RFC_793

Link: 4. RFC_1055

Link: 5. Jon_Postel

Link: 6. HTTP/1.0

Link: 7. HTTP/1.1_spec

Link: 8. Wine

Link: 9. Real_Networks

Link: 10. patents

Link: 11. IETF_process

Link: 12. UnixTM

Link: 13. GX_vs._OpenType_layout
