
↩ go back to index

Re: The Gemini Protocol Seen by This HTTP Client Person

June 4, 2023

Gemini isn't trying to “solve” the failures of the Web in the same way that PICO-8 isn't trying to solve the failures of AAA games

Prefatory Matters

First, the post by Daniel Stenberg, creator and primary maintainer/BDFL of the venerable cURL library & utility:

Daniel Stenberg: The Gemini Protocol Seen by This HTTP Client Person

Fediverse thread with my initial thoughts (and Tomasino's)

And the complete discussion on Gemini (to my knowledge, as of this writing):

Thrig: The HTTP client person as seen from Gemini

tjp: Mising [sic] The Point

Textmonger: A couple comments

JeanG3nie: Re: Missing the Point

CircaDian: On Perspective

Textmonger: My online experience would have been so much less without cURL

Alex Schroeder: Gemini and curl

One thing to note: Daniel Stenberg has explicitly stated several times that, as long as the code is up to cURL's coding standards, he's more than willing to add Gemini support to cURL. I'm sure he also has more than his fair share of criticisms of HTTP and every other internet protocol cURL already supports.

Also, in this post I'm mostly just going to discuss Daniel Stenberg's original post and not the responses to it. Shockingly, almost all of the responses were measured, well-written, and acknowledged the real issues Daniel addressed, like I'm attempting to do here.

But I do feel the need to call out Thrig's post in particular: it's what I was expecting the majority of on-Gemini responses to be like, filled with hand-waving and various fallacies that smell of wanting to find any and every excuse to dismiss the entire article outright—not unlike what people on “““hacker””” ““news”” do any time an article that's positive about Gemini is posted, if I may dare to make the comparison.

Running Commentary

Gemini has no cookies, no negotiations, no authentication, no compression and basically no (other) headers either in a stated effort to prevent surveillance and tracking. It instead insists on using TLS client certificates (!) for keeping state between requests.

This sounds more like pleasant surprise and interest than a criticism. “Insists” often has a negative connotation, but I read this more as interest: Gemini actually uses client certificates, which are rarely if ever used with any other protocol.
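For the curious, here's a minimal sketch in Python of what that looks like on the wire: the client presents the same self-signed identity certificate on every connection, and the server recognizes its fingerprint. The host and file names are illustrative assumptions, not real values.

```python
import socket, ssl

# Minimal sketch: present a persistent client identity to a Gemini
# server. "example.com", "identity.crt", and "identity.key" are
# illustrative assumptions.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False       # Gemini uses TOFU, not CA validation
ctx.verify_mode = ssl.CERT_NONE  # fingerprint checking happens separately
ctx.load_cert_chain("identity.crt", "identity.key")  # the "state"

with socket.create_connection(("example.com", 1965)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        tls.sendall(b"gemini://example.com/\r\n")  # the entire request
        print(tls.recv(4096).decode("utf-8", "replace"))
```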

Its [sic] quite similar to going back to GOPHER.


It seems to be true because Gemini documents certainly are never visually very attractive. Like GOPHER.


The Gemini protocol reeks of GOPHER and HTTP/0.9 vibes. Application protocol style anno mid 1990s with TLS on top. Designed to serve single small text documents from servers you have a relation to.

Sounds like he's rarely if ever seen a plaintext document rendered by anything other than a web browser or a code editor.

No one show him Lagrange!

Although if it didn't have the negative tone I'd say he's almost getting it :)

The protocol enforces closing the connection after every response, forcibly making connection reuse impossible. This is terrible for performance if you ever want to get more than one resource off a server.

This ties into the previous, accidentally salient point: it is literally designed to serve small, standalone text documents, not bulk downloads. Maybe reuse would be nice when navigating around a capsule, but at least for me, 80% of the time I visit a capsule once per session, having been linked from an aggregator or my bookmarks.

Although I do agree that sending a cert and doing a full handshake every time is rather wasteful, considering the average size of Gemtext pages it's regularly a 50–100% overhead. I couldn't imagine how bad the overhead would be if Gemini used a CA system and you sent full certificate chains XD. **BUT**, note that this is “50% overhead” of a 10 KiB page (90% of Gemtext pages are ≀ that size[1]), so the *total* size of a Gemini connection is still 0.73% the size of just the *page content* of the average website.
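A quick sanity check on those numbers (the sizes here are rough assumptions, not measurements):

```python
page = 10 * 1024             # a 10 KiB gemtext page
handshake = 5 * 1024         # ~50% overhead: TLS handshake + certificate
total = page + handshake     # ~15 KiB for the entire connection

avg_web_content = 2 * 1024 * 1024   # assume ~2 MiB of content on an average website
print(f"{total / avg_web_content:.2%}")  # prints 0.73%
```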

My guess is that people will not serve “normal” HTML content over this protocol.

What a shocker!

Everyone knows gemtext is just nonsense and everyone is planning to serve HTML. No possible way that if you just wanted to serve HTML you'd just make a minimalist website, like many Gemini users already have.

There are no other methods or ways to send data to the server besides the query component of the URL. There are no POST or PUT equivalents. There is basically only a GET method. In fact, there is no method at all but it is implied to be “GET”.

Refer to:

Designed to serve single small text documents from servers

There is nothing written about how a client should deal with the existing query part in this situation. Like if you want to send a query and answer the prompt.

It's bad that this wasn't explicitly mentioned in the specification. The general strategy: if there's no query attached, the server sends the prompt (and the client asks the user for a response); if there is a query attached, it's assumed to be a response to the prompt. This makes stuff that doesn't need state nice and stateless, and enables a browser to implicitly fill in a query string to go right to search results, instead of having to follow a prompt, input the search, and be redirected.
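Here's a minimal sketch of that strategy; the resolve_input helper and its arguments are hypothetical, not from any real client:

```python
from urllib.parse import quote, urlsplit

# Hypothetical sketch of the strategy above: a query already attached to
# the URL is treated as the answer to a 1x INPUT prompt; otherwise ask.
def resolve_input(url: str, prompt: str, ask_user) -> str:
    if urlsplit(url).query:       # query present: assume it answers the prompt
        return url
    answer = ask_user(prompt)     # no query: prompt the user interactively
    return url + "?" + quote(answer)  # re-request with the answer attached

# e.g. resolve_input("gemini://example.com/search", "Search terms?", input)
```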

Better use a short host name and a short path name to be able to send as much data as possible.

It is rather strange that the amount of “upload” space varies, but since the query string is not designed for bulk data transfer it's not a big issue, unless you're doing some awful massive web search or something.
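The 1024-byte cap on the request URL is the one hard number here, so the available “upload” space really is whatever is left over after the scheme, host, and path. A toy illustration:

```python
# Toy illustration: the request URL is capped at 1024 bytes, so a longer
# host/path leaves less room for the query string.
def max_query_bytes(host: str, path: str = "/") -> int:
    return 1024 - len(f"gemini://{host}{path}?".encode("utf-8"))

print(max_query_bytes("a.io"))                                            # 1009
print(max_query_bytes("very.long.hostname.example", "/cgi-bin/search"))   # 973
```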

[1]: 90% of Gemtext pages are ≀ that size

Trust on First Use

This is the weakest part of Gemini. Daniel makes some very salient points here, particularly relating to sCaLaBiLiTy; scaling may not be a Gemini goal, but the points are still relevant, and some will show up at the rather small scales Gemini is already nearing.

Happily though, his conclusion ends up being very wrong:

I strongly suspect that many existing Gemini clients avoid this huge mess by simply not verifying the server certificates at all or by just storing the certificates temporarily in memory.

AFAICT, every single public Gemini client does do proper TOFU and stores the fingerprint of each server visited. The one singular benefit of TOFU is that you can just dump the cert fingerprint and hostname/origin into a persistent hashmap really easily. But as far as I know there are zero cases of clients using an explicitly interoperable format to store them :(
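That “persistent hashmap” really is about as simple as it sounds. A minimal sketch, with the caveat that the JSON format here is invented on the spot, which is exactly the interoperability problem:

```python
import hashlib, json
from pathlib import Path

# Minimal TOFU sketch: the store is just origin -> certificate
# fingerprint. The file name and JSON format are invented here; real
# clients each use their own, mutually incompatible, formats.
STORE = Path("known_hosts.json")

def tofu_check(host: str, port: int, der_cert: bytes) -> bool:
    fingerprint = hashlib.sha256(der_cert).hexdigest()
    known = json.loads(STORE.read_text()) if STORE.exists() else {}
    key = f"{host}:{port}"
    if key not in known:              # first use: trust and remember
        known[key] = fingerprint
        STORE.write_text(json.dumps(known))
        return True
    return known[key] == fingerprint  # afterwards: must match exactly
```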

Tomasino has written better than I can on potential improvements to the TOFU scheme in a backwards-compatible way, so I defer to his posts:

SSHFP and the TOFU Issue

DANE and TLS

Unfortunately, despite being a great idea, DANE probably isn't viable, simply due to the lack of DNSSEC deployment; additionally, many domain registrars that also provide easy DNS hosting (such as Namecheap, which I use) don't support TLSA records.
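For reference, the relevant DANE record for a Gemini server would be a TLSA record published at _1965._tcp.&lt;hostname&gt;; with usage 3 (DANE-EE), selector 0 (full certificate), and matching type 1 (SHA-256), the payload is just a hash of the server's DER certificate. A sketch, with an illustrative hostname:

```python
import hashlib

# Sketch: payload of a "3 0 1" TLSA record (DANE-EE, full certificate,
# SHA-256) that would be published at _1965._tcp.example.com for a
# Gemini server's certificate.
def tlsa_payload(der_cert: bytes) -> str:
    return hashlib.sha256(der_cert).hexdigest()
```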

Running Commentary Cont.

I don't have much to say about proxying. I think the intent is that if you do have a proxy, it's hosted by yourself, either locally or on your own server, where you'd just have it terminate the TLS for you and handle TOFU and all that stuff. It's not a big use-case for Gemini, so I honestly think it's not worth the effort: basically a mechanism for making a neat toy, or for terminating TLS when using Gemini on retro computers.

The URI Problem

Thrig:

"Designed by committee" [sic][2] isn't exactly a ringing endorsement

One could argue that a single person designing something isn't exactly a ringing endorsement either :P Gemini's early history definitely has the smell of one person designing it and not recognizing issues, and its later history is design-by-committee but even worse, because it was “design by an amorphous, anonymous, and ignorant committee”.

And wow, are both sides of that coin ever exemplified where the request header/URI format is specified. Both “oh yeah I and my close friends know the context and how to interpret this, I don't need to elaborate any more” and “100 people who don't know what they're talking about suggested this stuff” are rampant.

[2]: The [sic] is not indicating a typo, just “thus was it written”, since the quotes are scare quotes and the quote is not from Daniel's original post

Existing Discussion

This section aggregates the existing criticism, to which I cannot add much; the Elaboration section below details my potential solution.

First, read the “URLs” section of Daniel's blog post, which is a bit too long for me to feel comfortable quoting here. (Come on, just read the whole post before continuing rather than piecing it together from what I summarize here.)

Discussion on the Fediverse:

@nytpu@tilde.zone (myself):

Hmm, and regarding the UTF-8 URLs, IIRC there was a big debate in the mailing list about whether or not clients should punycode IDNs, or transcode them to a different encoding, or just verbatim pass all URLs; and Solderpunk decided to just UTF-8 encode all URLs over the wire which is IMO not any better and possibly worse. Introduces lots of complexity: should clients normalize the Unicode (IME more complex and difficult than punycoding), and if so with what strategy? Should you still percent-encode the path or can you now leave that unencoded too? Lots of issues.

@bagder@mastodon.social (Daniel Stenberg):

yeah, it seems totally crazy to me and seriously under-documented what exactly it means

@tomasino@tilde.zone (Tomasino):

agreed. that was one of the most frustrating bikeshedding decisions. Nobody chiming in really understood the complexity of the URI spec or the implications of what they were proposing, and it went totally against the other goals of the project.
Using TLS, for instance, was chosen because library support was so common and accessible it lowered the barrier to server and client creation. Tweaking URL parsing made things harder without any tangible benefit.
I hope that gets scrapped, personally. And yes, carving the spec into 2 (protocol and document format) has been discussed already and piloted. It just hasn't made its way back to the official living document. That would help as well.
Where I think we'll probably see more pushback is on TOFU (I fully support DANE, but DNSSEC is still such a barrier), and chunking. There's a major philosophy of 1-request-per-document which seems antithetical to chunking.
Regardless, all 100% valid criticism and from one of the most reputable sources. I'm sure the whole community will be discussing this for weeks to come. Cheers!

Elaboration

Both when this was originally being discussed and again in the past few days, I've vacillated between several possible solutions, but both times I settled on the same one: clients should simply pass URIs on exactly as they are given.

Clients really should just not muck with URIs more than they need to. If it's parsable according to RFC 3986, they should parse it into components and then treat each of those components as opaque as possible. Don't punycode or transcode or do anything to the hostname, just pass it to the system's DNS resolver and let that encode what's needed, and then verbatim send the absolute URI to the server. If it looks punycoded then leave it, if it's percent-encoded then leave it—even if some control characters/high characters are percent-encoded and others aren't—but otherwise just send it as it was received.[3]

Whether you clicked a link on a Gemini page, copied and pasted a URI into your client, or clicked a link elsewhere and the system's URI dispatcher sent it to the client, the URI was deliberately encoded and written the way it was, and should be preserved. Within the domain of the Gemini protocol there's no good reason to mutate or transcode stuff. All you're doing is sending it to the server followed by a CRLF, plus some basic URL manipulation like making an absolute URI from a relative one, nothing else.
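In code, the entire URI handling of such a client reduces to roughly this sketch (the function name and return shape are hypothetical):

```python
from urllib.parse import urljoin, urlsplit

# Sketch of the hands-off approach: parse per RFC 3986 only to find out
# where to connect, then send the URI verbatim. No punycoding, no
# percent-(re)encoding, no Unicode normalization.
def build_request(uri: str, base: str | None = None) -> tuple[str, int, bytes]:
    if base is not None:
        uri = urljoin(base, uri)        # relative -> absolute, nothing more
    parts = urlsplit(uri)               # structural parse only
    host, port = parts.hostname or "", parts.port or 1965
    return host, port, uri.encode("utf-8") + b"\r\n"
```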

Don't get mad about this *technically* breaking change! It turns out, after looking at the source code, that most clients already do exactly this. “Most” being the most popular clients: Lagrange, Kristall, Amfora, Bombadillo, and AV-98; the last being written by Solderpunk himself. Plus, when clicking a link with non-ASCII characters on a Gemtext page, the request will remain UTF-8 and be unchanged from the current specification for 99.9996769% of Gemtext documents[4]

The Gemini spec does specify that clients should normalize URIs per RFC 3986 § 6.2.3, which is not a bad idea since it makes them look nicer. § 6.2.3 normalization is *not* pathname normalization (§ 6.2.2) but rather “cleaning up” empty or redundant components of the URI, like removing an explicit port that redundantly specifies the protocol's default. Clients certainly shouldn't normalize pathnames: they might not be directory-style paths like § 6.2.2 assumes, but rather opaque strings that the server expects as-is. The server can apply that normalization as necessary if it does map paths to directories.
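That scheme-based normalization is mechanical; a sketch, assuming 1965 as the gemini scheme's default port:

```python
from urllib.parse import urlsplit, urlunsplit

# Sketch of RFC 3986 § 6.2.3 scheme-based normalization only: lowercase
# the scheme and host, and drop a redundant default port. The path
# component is deliberately left untouched (no § 6.2.2 normalization).
def normalize(uri: str) -> str:
    p = urlsplit(uri)
    netloc = p.hostname or ""
    if p.port is not None and p.port != 1965:
        netloc += f":{p.port}"
    return urlunsplit((p.scheme.lower(), netloc, p.path, p.query, p.fragment))

# normalize("GEMINI://Example.Com:1965/a%2Fb") == "gemini://example.com/a%2Fb"
```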

[3]: There are special cases where you need to manually punycode the hostname for DNS to resolve properly; that caveat does not apply here, since this is primarily regarding the “<url>\r\n” request sent directly to servers.

[4]: 99.9996769% of Gemtext documents

Running Commentary Cont. 2: Electric Boogaloo

The document is carelessly thinking “host name” is a good authority boundary to TLS client certificates
It needs to restrict it to the host name plus port number. Not doing that opens up Gemini for rather bad security flaws

Yes. A very minor change and 100% worth it; I thought it was rather strange it wasn't included in the first place.
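In client terms the fix is tiny: key remembered identities by (host, port) rather than by bare host name. A sketch with illustrative types:

```python
# Sketch: scope remembered client identities to an origin, i.e. host
# AND port, never host alone. The Identity type is illustrative.
Identity = tuple[str, str]                 # (cert path, key path)

def identity_for(store: dict[tuple[str, int], Identity],
                 host: str, port: int) -> Identity | None:
    return store.get((host, port))         # no fallback to bare host
```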

The text/gemini media type should simply be moved out the protocol spec and be put elsewhere. It documents content that may or may not be transferred over Gemini. Similarly, we don’t document HTML in the HTTP spec.

Even Solderpunk agreed with this and was/is planning on splitting up the spec. I too agree, even as an implementer I like to have separate documents depending on what I'm actually implementing at that point—as long as it doesn't get to the level of the MIME type specification where there's 15 RFCs to juggle, which Gemini shouldn't ever get to.


some people will insist that I have misunderstood parts or most of the protocol spec. I think that is entirely plausible and kind of my point: the spec is written in such an open-ended way that it will not avoid this. We basically cannot implement this protocol by only reading the spec.

I do agree even mandating that you read the FAQ—or god forbid, the mailing list archives—to clarify things is not good. If it's required to understand and implement the protocol, it should go in the specification(s), not a companion document that implementers don't know that they're supposed to read.

It is impossible to tell if this will fly for real or not. This is not a protocol designed for the masses to replace anything at high volumes. That is of course totally fine and it can still serve its community perfectly fine. There seems to be interest enough to keep the protocol and ecosystem alive for the moment at least. Possibly for a long time into the future as well.

Yes, now he's getting the point! Considering that Gopher is thirty-two years old and has maintained a small but growing community, and that Gemini has already surpassed the Gopher community many times over and remains very active three years later, I think Gemini will do just fine even if the userbase shrinks substantially. There are also likely a fair number of untracked (by design!) “lurkers” who don't write or participate on Gemini extensively; given the number of Astrobotany plants, a lot of people visit Geminispace at least once every few days.

What [Daniel Stenberg] Would [Personally Prefer to but is not Demanding Others to] Change

Clarified that heading for some confused people :P

1. Split the spec into three separate ones: protocol, URL syntax, media type. Expand the protocol parts with more exact syntax descriptions and examples to supplement the English.

I mostly agree. A whole separate document for URL syntax may be a bit much, but certainly text/gemini and the protocol should be split. And 100% to the more precise and formalized descriptions.

2. Clarify the client certificate use to be origin based, not host name.

Previously noted as a hard agree. Shouldn't be mandatory for backwards compatibility, but should be *strongly* encouraged for any and all new certificates.

3. Drop the TOFU idea, it makes for a too weak security story that does not scale and introduces massive complexities for clients.

I do not like the idea of dropping it, once again for backwards compatibility. Unlike the web, where there are essentially three browsers, there are 50+ Gemini clients (dozens of them actively maintained), and completely revising a previously core feature of the protocol is not fun for anyone. But if DANE became a de facto standard replacing TOFU in most cases, I would be 100% on board.

4. Clarify the UTF-8 encoding requirement for URLs. It is confusing and possibly bringing in a lot of complexity. Simplify?

See above.

5. Clarify how proxying is actually supposed to work in regards to TLS and secure connections. Maybe drop the proxy idea completely to keep the simplicity.

Yeah, I wouldn't be opposed. Or at least clarify that proxies should only be hosted locally (potentially even only on-LAN) and not be trusted for general browsing. I dunno if proxying was even deliberate in the first place or if specifying the full hostname was just in case your TLS library doesn't expose SNI information?

6. Consider a way to re-use connections, even if that means introducing some kind of “chunks” HTTP-style.

NAK, as above.

In Conclusion

I really appreciate Daniel Stenberg's post even though a small portion of it rather misses the point of Gemini. It feels like he did try to get at least a surface-level understanding, despite shoving Gemini into a modern web context; when people on “““hacker””” ““news”” do the same, they don't bother levying any actually valid criticisms or even attempt to understand.

Although I will say that calling Gemini inherently visually unappealing is a blatant, nearly-deliberate misunderstanding, considering that one of the biggest fucking points of Gemini in the first place is that you can curate the experience however you personally want, whether that's a monochrome 16x24 fixed-width terminal or rainbow words that dance across the screen one by one[5]. I do wish Daniel had read the FAQ beforehand because it does clarify the *intent* behind many decisions, like that Gemini is almost exclusively intended to serve textual content like a blog; most anything else ends up being a toy or neat tech demo, not intended for general use.

I do feel he managed to concisely summarize several real issues with the Gemini spec, some which have been pointed out before and were/are being worked on, some that had been mostly settled until he brought them up again, but some interesting new issues as well. “Concisely”, unlike this post XD

[5]: I gotta make a client like that now


⁂

↩ go back to index

also available on the web

contact via email: alex [at] nytpu.com

or through anywhere else I'm at

backlinks

-- Copyright © 2023 nytpu - CC BY-SA 4.0