
Thoughts and plans for offline first

2021-01-20

A while ago I read Solderpunk's post about moving toward "offline first":

Solderpunk: Progress toward "offline first"

It's since been milling around in my mind long enough that I had almost convinced myself the idea was mine! However, my needs are a little different. Since the beginning of the pandemic I've been working in a way that needs an always-on connection, so I can't reserve a weekly period for internet-doing. I could do such a thing in my non-work life, but there the focus of internet-doing would, I expect, be what the post identifies as a gap: retrieving and consuming other people's content.

What I think would be a valuable solution to this problem consists of two parts: one, a GUI tool for reading saved offline content; and two, a daemon that collects URLs from the GUI tool, detects when the Internet is available, and fetches the content, making it available to the GUI for offline reading.

I haven't been able to plan much here yet, but here's the vaporware that I imagine.

The Viewer

First, the GUI tool. (And yes, while the Geminisphere is typically part of the text-only fandom, I like GUIs.) In my searching so far, I was surprised not to find any existing tool that does nothing more than render rich text, say HTML, without being attached to an Internet fetcher. It seems like something that would fit nicely within the Unix philosophy, and down the road I may make something for this. (While I don't actually have much GUI experience, PyQt plus Qt's native HTML rendering abilities look promising for doing this in a small amount of code.)
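
To give a sense of how small this could be, here is a minimal sketch of such a viewer using PyQt5's QTextBrowser, which renders a rich-text subset of HTML from a local file and never touches the network. (PyQt5 is my assumption here; any Qt binding with rich-text widgets would do.)

```python
# Minimal offline HTML viewer sketch: render a local file, no fetching.
import sys
from PyQt5.QtWidgets import QApplication, QTextBrowser

app = QApplication(sys.argv)
viewer = QTextBrowser()
viewer.setOpenExternalLinks(False)  # stay strictly offline
with open(sys.argv[1], encoding="utf-8") as f:
    viewer.setHtml(f.read())        # QTextBrowser renders a rich-text subset of HTML
viewer.setWindowTitle(sys.argv[1])
viewer.resize(800, 600)
viewer.show()
sys.exit(app.exec_())
```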

The Fetcher

As far as I know, nothing like the fetcher that I would want currently exists, but it doesn't seem like it would be too troublesome to build: a simple loop that shells out to wget, or curl, or a Gemini fetcher. In the case of HTML documents, I would probably want the fetcher to be smart enough to go one level deeper and get images, probably also rewriting them to local links. wget already has this built in.
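
A minimal sketch of that loop, assuming the Viewer deposits one URL per line into a queue file; the paths and the connectivity probe are placeholders of my own invention:

```python
# Fetcher daemon sketch: drain a queue file with wget when online.
import pathlib
import subprocess
import time

QUEUE = pathlib.Path.home() / ".offline-queue"   # URLs, one per line (hypothetical)
CACHE = pathlib.Path.home() / ".offline-cache"   # where fetched pages land

def online() -> bool:
    # Crude connectivity probe; any reliable check would do here.
    return subprocess.run(
        ["ping", "-c", "1", "-W", "2", "9.9.9.9"],
        capture_output=True,
    ).returncode == 0

while True:
    if QUEUE.exists() and online():
        for url in QUEUE.read_text().split():
            # --page-requisites grabs images and such one level deep;
            # --convert-links rewrites them to local paths for offline viewing.
            subprocess.run(
                ["wget", "--page-requisites", "--convert-links",
                 "--directory-prefix", str(CACHE), url]
            )
        QUEUE.unlink()
    time.sleep(60)
```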

The fetcher could also include automatic updating of RSS feeds or Gemini feeds.
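
Feed polling could be a small extension, sketched here with the third-party feedparser library (the feed list is hypothetical); new entry links just go into the same queue:

```python
# Feed polling sketch: enqueue each entry's link for the next fetch pass.
import feedparser  # third-party: pip install feedparser

FEEDS = ["https://example.com/blog/atom.xml"]  # hypothetical feed list

def poll_feeds(queue_path: str) -> None:
    with open(queue_path, "a", encoding="utf-8") as queue:
        for feed_url in FEEDS:
            for entry in feedparser.parse(feed_url).entries:
                queue.write(entry.link + "\n")
```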

The Format

The Viewer and the Fetcher need to coordinate in both directions: Viewer to Fetcher (V2F) and Fetcher to Viewer (F2V). V2F should be pretty easy, since not much needs to be sent to the Fetcher. It could be a protocol over sockets, or a simple filesystem protocol where the Viewer deposits URLs that it wants to see into a file or a directory.
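
The filesystem flavor is about as small as protocols get. A sketch of the Viewer's side, using the same hypothetical queue file as above:

```python
# V2F sketch: the Viewer appends URLs; the Fetcher drains the file later.
import pathlib

QUEUE = pathlib.Path.home() / ".offline-queue"  # hypothetical path

def request_url(url: str) -> None:
    with QUEUE.open("a", encoding="utf-8") as queue:
        queue.write(url + "\n")
```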

Going the other way, F2V communication makes the most sense as a shared view of an offline cache: the Fetcher writes documents in, and the Viewer reads them out. This problem is remarkably similar to email.
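
Taking the email analogy literally, the Fetcher could deliver each fetched document into a Maildir using Python's standard mailbox and email modules. A sketch, with hypothetical headers:

```python
# F2V sketch: wrap a fetched page as a message in a Maildir, so any
# Maildir-capable client can read it.
import mailbox
from email.message import EmailMessage

def deliver(html: str, url: str, maildir_path: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = url
    msg["From"] = "fetcher@localhost"   # hypothetical header values
    msg["To"] = "viewer@localhost"
    msg.set_content(html, subtype="html")
    box = mailbox.Maildir(maildir_path, create=True)
    box.add(msg)
    box.close()
```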

In the meantime, email clients that can read mbox, Maildir, or MH mail are promising for this task. Claws Mail with plugins for reading HTML mail works well and has a reasonable UI.

Future Work

Right now, this is all vaporware. I've begun work on the minimum viable fetcher, which for now is just Python that fetches Gemini content, formats it to HTML, and puts it in a Maildir. I've also been playing around with Emacs' gnus, which is new to me, but if I can learn it, it also looks promising as a content reader. Looking at the gnus documentation, there is less of an impedance mismatch between what I want and what it provides, compared to traditional email clients. I like that it already abstracts mail and news as two streams of information that arrive in discrete units. Gemini and HTTP/HTML content can just be another stream!
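
The Gemini half of that minimum viable fetcher is small enough to sketch in full: open a TLS connection to port 1965, send the URL plus CRLF, and read back a header line and body. This sketch ignores redirects and non-success statuses, and skips certificate verification since Gemini servers typically use self-signed certificates:

```python
# Minimum viable Gemini fetch sketch (no redirects, no TOFU pinning).
import socket
import ssl
from urllib.parse import urlparse

def gemini_fetch(url: str) -> str:
    host = urlparse(url).hostname
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # self-signed certs are the norm
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, 1965)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall((url + "\r\n").encode("utf-8"))
            raw = b""
            while chunk := tls.recv(4096):
                raw += chunk
    header, _, body = raw.partition(b"\r\n")
    if not header.startswith(b"2"):  # 2x status codes mean success
        raise RuntimeError(header.decode("utf-8", "replace"))
    return body.decode("utf-8", "replace")
```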