
Status codes

jan6 at tilde.ninja

Wed May 20 22:13:34 BST 2020

- - - - - - - - - - - - - - - - - - - 

May 20, 2020 11:16 PM, "solderpunk" <solderpunk at sdf.org> wrote:

> I do kind of worry that the time to propose changes to "core" stuff is
> passed or passing. New implementations are being written at an
> astonishing rate and with so many clients and servers out there, every
> substantial change runs the risk of fracturing the nascent Geminispace
> into incompatible subspaces. Stuff that is very poorly implemented,
> like client certificate stuff, doesn't have this risk so much, but
> anything fundamental I worry is already more or less "set" now. It's
> the downside to unexpected explosive growth.

Can't you simply set up different version revisions for that, though? Maybe just a "gemini-stable" and a "gemini-next" branch, where gemini-next is explicitly experimental and can change at any time, and ideas from there that work can be cycled into gemini-stable at some kind of set interval?

>> Suggestion #5: A comment, really
>> 5x codes are by design permanent errors, but 51 (the HTTP 404 equivalent) is
>> actually a temporary problem according to the spec.
>> In fact, this is precisely what differentiates it from HTTP 410 GONE
>> (Gemini 52). So there seems to be a design error here, but I don't really
>> know what the correct solution is. Either 5x aren't really permanent
>> errors (what would they be called then?) or 51 shouldn't be a 5x error
>> to begin with.
> It's true that "not found" is, in principle, temporary, or at least
> non-permanent, in the sense that, yes, maybe tomorrow or next month or
> next year there will be something at that address.
> The temporary/permanent error distinction in Gemini is intended mostly
> to be useful for non-human user agents, like search engine crawlers or
> feed aggregators or things like that, rather than people sitting in
> front of something like Bombadillo or Castor. If a bot tries to fetch a
> feed from a bad URL, it would be nice if it didn't continually try again
> every hour on the hour thinking that it's only a temporary failure and
> one day the feed will appear!
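
For reference, a Gemini response header is just a two-digit status code, a space, and a free-text meta field, so a non-human agent only has that first digit to go on. Here's a minimal sketch of that retry decision in Python; the header format is from the spec, but the helper names and the retry policy itself are just illustrative:

```python
# Illustrative sketch, not from the spec: decide whether a crawler
# should retry a URL based on the first digit of the Gemini status.

def parse_header(header_line: str) -> tuple[int, str]:
    """Split a "<STATUS><SPACE><META><CRLF>" header into (status, meta)."""
    status, _, meta = header_line.strip("\r\n").partition(" ")
    return int(status), meta

def should_retry(status: int) -> bool:
    """4x is TEMPORARY FAILURE (retry later); 5x is PERMANENT FAILURE
    (drop the URL from the crawl queue for good)."""
    if 40 <= status <= 49:
        return True   # e.g. 41 SERVER UNAVAILABLE, 44 SLOW DOWN
    return False      # includes 51 NOT FOUND and 52 GONE

status, meta = parse_header("51 Nothing at this address\r\n")
assert should_retry(status) is False  # a polite bot stops re-fetching
```

Note that 51 currently sits on the "never retry" side, which is exactly the tension the suggestion above points at.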

I think it'd be best if it returned an (optional?) timeout/expiry value with the status (if not optional, 0 or -1 can signify infinite time). I'm not sure what time unit; probably just seconds, though "1h" for 1 hour and the like is also possible. That way the server can specify whether the page should be flagged as missing for the next hour, the next day, forevermore, etc. That's useful in cases where you temporarily take down a page for whatever reason, or where you change URLs every so often, or if you're simply prone to typos and sometimes fumble the links and don't want to bother manually asking for a re-index... sometimes you might not even know that a page isn't being crawled by some crawler.
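
As a sketch of how a bot could read that (this expiry token is entirely hypothetical, it is not in any spec, and the function name is made up):

```python
# Hypothetical, NOT part of the Gemini spec: interpret the first token
# of a 51 response's meta as an expiry hint in seconds (0 = forever).

def not_found_expiry(meta: str, default: int = 0) -> int:
    """How long (seconds) to treat the URL as missing before
    re-checking; 0 means 'never re-check'."""
    first, _, _ = meta.partition(" ")
    try:
        return int(first)
    except ValueError:
        return default  # plain human-readable meta, no hint given

# "51 3600 Taken down for maintenance" -> re-check after an hour
assert not_found_expiry("3600 Taken down for maintenance") == 3600
# "51 Not found" -> no machine-readable hint, treat as permanent
assert not_found_expiry("Not found") == 0
```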

Also, while this is probably not necessarily part of the spec: what should happen if there's a redirect to a nonexistent URL? Should the URL that was redirected FROM be permanently "not found" as well?
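
One possible policy, again just a hypothetical sketch and not anything the spec says (fetch() here is an assumed helper returning (status, meta) for a URL): follow the chain and attribute the final status to the URL you started from:

```python
# Hypothetical crawler policy, not from the spec: if a redirect chain
# ends in 51, record the originally requested URL as "not found" too.
# Relative-URL resolution on the meta field is omitted for brevity.

def final_status(url: str, fetch, max_hops: int = 5):
    """Follow 3x redirects; return (final_url, status, meta)."""
    for _ in range(max_hops):
        status, meta = fetch(url)
        if 30 <= status <= 39:
            url = meta  # meta of a 3x response is the redirect target
        else:
            return url, status, meta
    raise RuntimeError("too many redirects")

# If this returns status 51, the crawler could mark both the final URL
# and the original one as missing.
```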

And if it's intended for non-human agents, maybe mention that too? That is, human-controlled clients are allowed to re-request on demand and don't have to block the URL forever.

"client error" makes more sense than "permanent error" in this case, too