Someone's proposed a new protocol, which is sort of a version of the Spartan protocol over UDP, with in-band support for splitting messages over packets.

The Guppy Protocol Specification v0.1

I love thinking about network protocol design, so proposals like this one always make my ears perk up!

My understanding, having implemented server software for both Gemini and Spartan, is that Spartan mostly succeeds at being 90% faster than Gemini by altering about 10% of the spec. These alterations include removing TLS entirely and reducing the amount of data included in requests and responses.

It also adds a new text/gemini line type, which is unfortunate because it introduces a strict incompatibility with existing Gemini software, though robust fallbacks are possible to implement server-side.

The Guppy proposal does away with the new line type, which is great, and even goes so far as to (implicitly) declare TCP to be bloat! That's the main source of novelty in this proposal, I think. The goal is to make the protocol easy to implement and interact with on very-low-powered devices, such as microcontrollers.

Here is my top-line gut reaction to this proposal:

This is an awesome idea and we should see how far we can get with these principles in mind.

UDP retransmission and order reconstruction is a hard problem to circumvent, and might keep Guppy from being as performant as it could be.

Some backstory: UDP cannot be trusted

For a while before I set up Home Assistant, I had a whole custom room-temperature reporting stack: custom C code running on microcontrollers sent UDP packets containing a CBOR data structure to a server (a converted 2008 MacBook) running a custom Erlang listener program, which converted the CBOR to JSON and posted the data to an HTTPS server that logged it into Grafana. Every part of this was easy to set up and test in isolation, and the entire system was also easy to keep in my active memory, which aided in debugging. I spent a lot of time making sure that the microcontroller code was robust in the face of failure, that the Erlang server never died even when given terrible inputs, and so on.

Unfortunately, none of this mattered. The microcontrollers were communicating with a local server on the local area network; they had exclusive access to the 2.4 GHz Wi-Fi band in my home, and nothing else was using that network; the laptop-server was connected via ethernet directly to the router broadcasting the Wi-Fi signal.

I still saw a packet drop rate of about 1 in 10. The microcontrollers were set to send a single packet of data every four minutes, and over about a month, roughly 10% of received packets arrived with no packet received from that origin in the previous seven minutes. In other words, the packet in between had been dropped.

The real internet is going to be much, much, much worse than this.

But UDP is awesome!

HTTP/3 abandoned TCP because TCP is too generic to be optimal for certain use cases, such as complex multi-round handshaking. QUIC is kinda just the best way to do TLS.

But they only get away with that by implementing congestion control and dropped-packet retransmission in-band. This implementation is not dissimilar to TCP's; it's just able to be shuffled around the actor diagram of the overall application protocol, which leads to real (and massive) performance gains.

UDP-based protocols which do not implement packet reissue in-band, and really robustly at that, cannot rely on reliable or in-order transmission of packets. For some things, this is fine; content streaming, for example, is an obvious domain where latency and throughput can be more important than making sure that 100% of sent packets actually reach their destination.

Sending text documents to be read by a human isn't like that, though.

Guppy addresses ordering and retransmission concerns already

Guppy has an application-layer implementation of really basic order-assurance and dropped packet retransmission. That's awesome! The Guppy state diagram would look incredibly simple, too, which is fun.

Guppy servers send response data to clients in packets which cannot be larger than 512 bytes. Responses which don't fit in one such packet are chunked, and subsequent packets are sent in their own little response format which includes a header identifying each as a continuation packet. Clients are expected to send ack packets in response to each and every received packet, and servers are expected to re-send packets which aren't ack'd in time.
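
To make that concrete, here is a rough sketch of what a server's send loop might look like under these rules. The four-byte sequence-number header is my own invention for illustration; the actual packet layout in the spec may differ.

```python
import socket

CHUNK = 512 - 4     # assume a 4-byte header, leaving 508 bytes of payload
ACK_TIMEOUT = 2.0   # seconds to wait for an ack before retransmitting
MAX_RETRIES = 5

def send_response(sock: socket.socket, addr, data: bytes):
    """Send data in <=512-byte packets, waiting for an ack after each one."""
    sock.settimeout(ACK_TIMEOUT)
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    for seq, chunk in enumerate(chunks, start=1):
        packet = seq.to_bytes(4, "big") + chunk
        for _ in range(MAX_RETRIES):
            sock.sendto(packet, addr)
            try:
                ack, _ = sock.recvfrom(512)
                if int.from_bytes(ack[:4], "big") == seq:
                    break              # ack'd; move on to the next chunk
            except socket.timeout:
                continue               # no ack in time; retransmit
        else:
            raise TimeoutError(f"packet {seq} was never acknowledged")
```

Note that the loop is fully synchronous: packet N+1 cannot leave the server until packet N has been acknowledged.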

I would expect this specific detail to be Guppy's Achilles' heel. It means gemtext documents of significant size will require many synchronous back-and-forth round trips between the client and server to complete, which will probably slow down communication drastically. The very small 512-byte packet size limit only makes this worse.

This is yet another problem that those in the web sphere spent decades forging a solution to: once you have a hyper-optimized application protocol, the actual packet round trips between client and server dominate your program's running time. Building a protocol around making all client-server interactions fully synchronous, and far more numerous than would otherwise be necessary, is going to slow it down *a lot*. It would not surprise me if Gemini, with all its TLS 1.2 bloat, were faster at serving documents over the internet in practice than Guppy would be, running on the same hardware.
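
Some back-of-envelope arithmetic makes the concern concrete, assuming one full round trip per packet (which is what per-packet acks imply) and ignoring transmission time entirely:

```python
doc_size = 100_000   # a 100 KB gemtext document
payload = 512        # generous: pretend the whole packet is payload
rtt = 0.1            # a plausible 100 ms round trip over the internet

packets = -(-doc_size // payload)   # ceiling division: 196 packets
print(packets * rtt)                # 19.6 seconds spent purely on round trips
```

Nearly twenty seconds of pure waiting for a document that a TCP-based protocol could stream in a handful of round trips.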

Other concerns

UDP also has an EOT problem: receivers of transmissions do not, generally, know how to distinguish between a timed-out session and the successful completion of data transmission. Guppy handles this, too, by requiring servers to send a special "end of file" packet after all other packets have been sent, even when the whole response would otherwise have fit in a single packet.
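
On the client side, that means the receive loop can key off a packet type rather than a timeout. A minimal sketch, with a made-up one-byte type header, and with acks and sequence numbers omitted to keep the shape visible:

```python
import socket

DATA_TYPE, EOF_TYPE = 0x00, 0x01   # hypothetical packet-type values

def receive_response(sock: socket.socket) -> bytes:
    """Collect data packets until the dedicated end-of-file packet arrives."""
    body = b""
    while True:
        packet, _ = sock.recvfrom(512)
        if packet[0] == EOF_TYPE:
            return body        # definite success, not a guess from a timeout
        body += packet[1:]     # strip the 1-byte header, keep the payload
```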

Moving forward: My suggestions

This is a pretty awesome proposal, and I'd be curious to write a toy implementation of it just to play around with, to see if some of my assumptions about its behavior would hold!

The way I see it, there are two ways to modify this protocol to solve the round-trip synchronicity problem:

Idea 1: More Minimalism

The idea: Scrap all of the continuation ideas, permit packets to reach 65,000 bytes in size, and prohibit requests or responses from exceeding this limit.

If all data served using the protocol fits in a single packet, continuation becomes the problem of *authors* rather than *software implementors*, which is probably a bad idea in general but would give us some very interesting properties. This version of "Minimal Guppy" might be one of the easiest network protocols to implement even on extremely low-power or single-use devices, and it would be blazing fast in comparison to multi-packet protocols in general.

You would still want an ack/retransmission system for the single packet the server issues.
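
To see just how minimal, here is a sketch of a complete "Minimal Guppy" server under those rules. The port number is arbitrary, the bare-URL request format is my assumption, and the ack/retransmission step described above is omitted:

```python
import socket

MAX_PACKET = 65_000
DOCUMENT = open("index.gmi", "rb").read()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 6775))   # arbitrary port; not from the spec

while True:
    request, addr = sock.recvfrom(MAX_PACKET)   # request assumed to be a bare URL
    if len(DOCUMENT) <= MAX_PACKET:
        sock.sendto(DOCUMENT, addr)             # the entire response, one datagram
    # a real server would send an error response for oversized content
```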

Idea 2: More Complexity

Alternatively, if serving content of arbitrary length is still desirable, the protocol could bake an ordering into the application data headers and permit packets to be sent out of order and reconstructed by the client.

It seems like it would be generally true that servers would know the total ordering of content they send.
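
Client-side reassembly then becomes a small bookkeeping exercise: buffer chunks by sequence number and stitch them together once none are missing. A sketch, with an invented layout of a 4-byte sequence number followed by payload, where sequence zero is a hypothetical metadata packet carrying the total count:

```python
def reassemble(packets: list[bytes]) -> bytes:
    """Rebuild a response from packets that arrived in any order."""
    chunks, total = {}, None
    for packet in packets:
        seq = int.from_bytes(packet[:4], "big")
        if seq == 0:
            total = int.from_bytes(packet[4:8], "big")   # total-count packet
        else:
            chunks[seq] = packet[4:]
    if total is None or len(chunks) < total:
        raise ValueError("transmission incomplete; ask for retransmission")
    return b"".join(chunks[i] for i in range(1, total + 1))
```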

Idea 2.1: C strings, or Pascal strings?

If the protocol included both the enumeration of the current packet *and* the total number of packets to be sent, an end-of-transmission packet would not be necessary.

This is functionally identical to the difference between C strings, which are blobs of bytes followed by a NUL, and Pascal strings, which are a number indicating length followed by a blob of bytes exactly equal to that length.

Sadly, it is not generally the case that servers know the length of the content they are going to send in advance. That's a shame, because Pascal-style packets have some really nice properties, such as avoiding the edge case where the server's end-of-transmission packet gets repeatedly dropped, degrading the user experience.

So here's a strange idea: what if the protocol permitted either an end-of-transmission packet, or an optional total-number-of-packets indicator in continuation packets? Better yet, allow servers to start including the total-packet-count indicator at an arbitrary continuation packet, rather than requiring it to be known before the first success packet. Or, better still, make the whole thing polymorphic by replacing the end-of-transmission packet with a packet that indicates the current packet number is equal to the total number of packets and carries no other data!
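
In packet terms, the polymorphic version might look like this: every continuation packet carries (sequence number, total-or-zero, payload), and the old end-of-transmission packet becomes the degenerate case where the sequence number equals the total and the payload is empty. All field sizes here are made up:

```python
def make_packet(seq: int, total: int | None, payload: bytes) -> bytes:
    """total=None means 'not yet known'; encode it as zero (assumed convention)."""
    return seq.to_bytes(4, "big") + (total or 0).to_bytes(4, "big") + payload

# Early in the response, the server may not know the total yet:
p1 = make_packet(1, None, b"# My Capsule\n")
# ...but it can start including it whenever it learns it:
p2 = make_packet(2, 3, b"some more gemtext\n")
# The final packet: seq == total, no payload. This *is* the EOT marker.
p3 = make_packet(3, 3, b"")
```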

Needless to say, there's a lot of room to play around with ideas like these.