Postel's Law

On Mon, Nov 30, 2020 at 3:28 AM Stephane Bortzmeyer <stephane at sources.org>
wrote:

> Note that one of the reasons why the Web became so bloated is
> precisely Postel's law (the robustness principle). Browsers and
> servers violate the spec, these violations soon became
> de-facto-mandatory-to-support and one after the other, the "unofficial
> spec" became much larger than the real one.

Not that I was there at the time, but I firmly believe that Postel's law
has always meant "Don't exploit obscure features of the protocol when
sending; but do be prepared for them, as well as for outright errors, when
receiving", and so has nothing to do with trying to make sense of received
but invalid transmissions.  RFC 1122 (1989) section 1.1.2 agrees with this
interpretation:

         At every layer of the protocols, there is a general rule whose
         application can lead to enormous benefits in robustness and
         interoperability:

                "Be liberal in what you accept, and
                 conservative in what you send"

         Software should be written to deal with every conceivable
         error, no matter how unlikely; sooner or later a packet will
         come in with that particular combination of errors and
         attributes, and unless the software is prepared, chaos can
         ensue.  In general, it is best to assume that the network is
         filled with malevolent entities that will send in packets
         designed to have the worst possible effect.  This assumption
         will lead to suitable protective design, although the most
         serious problems in the Internet have been caused by
         unenvisaged mechanisms triggered by low-probability events;
         mere human malice would never have taken so devious a course!

         Adaptability to change must be designed into all levels of
         Internet [...] software.  As a simple example, consider a
         protocol specification that contains an enumeration of values
         for a particular header field -- e.g., a type field, a port
         number, or an error code; this enumeration must be assumed to
         be incomplete.  Thus, if a protocol specification defines four
         possible error codes, the software must not break when a fifth
         code shows up.  An undefined code might be logged [...],
         but it must not cause a failure.

         The second part of the principle is almost as important:
         software on other hosts may contain deficiencies that make it
         unwise to exploit legal but obscure protocol features.  It is
         unwise to stray far from the obvious and simple, lest untoward
         effects result elsewhere.  A corollary of this is "watch out
         for misbehaving hosts"; host software should be prepared, not
         just to survive other misbehaving hosts, but also to cooperate
         to limit the amount of disruption such hosts can cause to the
         shared communication facility.

I think that implementers have in effect misunderstood the phrase "deal
with every conceivable error" as if it meant "accept every conceivable
error and try to act on it", but the context makes it clear (to me, at
least) that what is meant is "not crash on any conceivable error".
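The RFC's error-code example can be sketched in a few lines of Python. This is just an illustrative sketch (the enumeration and function names are invented, not from any real protocol): a known wire value maps to its code, while an undefined value is logged but does not raise, matching "logged [...], but it must not cause a failure".

```python
import logging
from enum import Enum
from typing import Optional

# Hypothetical enumeration: the four error codes a spec might define.
class ErrorCode(Enum):
    OK = 0
    TIMEOUT = 1
    REFUSED = 2
    UNREACHABLE = 3

def handle_error_code(raw: int) -> Optional[ErrorCode]:
    """Map a received wire value to a known code.

    Per RFC 1122's advice, an undefined code (the "fifth code") is
    logged and tolerated rather than treated as a fatal error.
    """
    try:
        return ErrorCode(raw)
    except ValueError:
        logging.warning("undefined error code %d received; ignoring", raw)
        return None
```

The point is the shape of the failure path: the receiver survives and keeps the connection usable, but it does not invent a meaning for the unknown code or act on it.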


John Cowan          http://vrici.lojban.org/~cowan        cowan at ccil.org
Almost all theorems are true, but almost all proofs have bugs.
        --Paul Pedersen
