Caching and sizes, the explosion of response codes (was Re: Caching and status codes)

> This is how I *think* clients *should* handle caching:
> 
> A client with history support, able to go ''backwards'' and
> ''forwards'' through history, should cache text/* responses in memory
> for that browsing session. When the user moves through the history
> with the ''forward'' and ''backward'' actions, no reloading should
> happen. But when the user clicks a link to a resource already in the
> cache, types the link by hand, selects it from previously visited
> links, or asks for a reload: the cached entry is purged and the
> resource reloaded. It is assumed that requests with a query part are
> idempotent. When a page is dynamic, it should say so, so that the
> user knows to reload it.
> With that, no new response codes are needed.

I think your proposal is excellent. I am sure a browser
could add a command "press r to reload" for any page.
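The quoted proposal is concrete enough to sketch in code. Here is a minimal, hypothetical session cache along those lines (the names `SessionCache`, `history_visit` and `explicit_visit` are mine, not from the proposal): history navigation serves text/* responses from memory, while any explicit navigation purges and refetches.

```python
# Sketch of the proposed client caching policy (assumed interface:
# fetch(url) returns a (mime, body) pair).

class SessionCache:
    def __init__(self, fetch):
        self.fetch = fetch   # fetch(url) -> (mime, body)
        self.store = {}      # url -> (mime, body), per browsing session

    def history_visit(self, url):
        """Back/forward: serve cached text/* responses, no reload."""
        if url in self.store:
            return self.store[url]
        return self._load(url)

    def explicit_visit(self, url):
        """Clicked link, typed URL, or reload: purge and refetch."""
        self.store.pop(url, None)
        return self._load(url)

    def _load(self, url):
        mime, body = self.fetch(url)
        if mime.startswith("text/"):
            self.store[url] = (mime, body)
        return mime, body
```

The "press r to reload" command would simply call `explicit_visit` on the current page.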

I have to say I am really puzzled by the protracted
discussion about caching. The difference between returning
the full document and a message saying "document hasn't
changed" is really small in the greater scheme of things:

The tcp 3-way handshake plus the tls negotiation
consume quite a number of packets, and add round-trip
latency. Network load is often measured in number of frames
rather than bytes, and there is space for 1500 or even
9000 bytes per frame - this means that if your document is
below that size (I'd venture most good ones are), then a
"not changed" response doesn't even change the number of
packets sent. That even suggests another heuristic: if
your content is dynamic, try generating a short document;
the implication being that larger documents are the ones
worth caching, as there might be an actual benefit.
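The arithmetic above can be made explicit with a back-of-the-envelope sketch. The packet counts below are rough assumptions (a 3-packet tcp handshake plus a few tls packets, 1500-byte frames, acks and header overhead ignored), not measurements:

```python
import math

HANDSHAKE_PACKETS = 3 + 4   # assumed: tcp 3-way handshake + tls exchange
MTU = 1500                  # typical frame payload in bytes

def packets_for(body_bytes, mtu=MTU):
    """Rough total packets for one request/response on a fresh connection."""
    data_packets = max(1, math.ceil(body_bytes / mtu))
    return HANDSHAKE_PACKETS + 1 + data_packets  # +1 for the request

# A 1 KB document and an empty "not changed" response both fit in one
# frame, so under these assumptions the totals come out identical;
# only documents larger than one frame would see any saving.
```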

I really think this is a holdover from people still thinking
in http+html instead of gemini:

Plaintext http allowed several parties to share
a cache. That isn't the case here, as things are
encrypted. Html often includes other urls ("img src"),
which might be shared across pages. Gemini doesn't do
that either.

And *if* caching should be done, then it seems
a poor idea to have the caching hints live in the
transfer/session/transport layer. Instead they should be in
the document markup.

Even http+html finally realised that with the messy
"http-equiv" meta fields in the markup. At least that
gives the document author a way to
tell us how long the document might be valid for. And
with a machine-readable license that would allow for
aggregation/replication/archiving/broadcast/etc, which seems a
much better way to save bandwidth and have persistent documents.
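For reference, the http-equiv mechanism mentioned above looks like this in html markup (the expires date here is just an illustration):

```html
<!-- In-document caching hint: http-equiv makes a meta element stand in
     for an http response header of the same name. -->
<meta http-equiv="expires" content="Tue, 01 Jun 2021 00:00:00 GMT">
```

A gemini equivalent, if one were wanted, would live in the gemtext document rather than in the response header line.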

TLDR: Don't think in http+html, do better

regards

marc

---

Previous in thread (46 of 55): 🗣️ bie (bie (a) 202x.moe)

Next in thread (48 of 55): 🗣️ Philip Linde (linde.philip (a) gmail.com)
