💾 Archived View for tilde.team › ~emilis › 2020 › 11 › 19-on-feeds-in-gemini-format.gmi captured on 2023-09-08 at 17:16:58. Gemini links have been rewritten to link to archived content


-=-=-=-=-=-=-

On Feeds in Gemini Format

⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯

Update 2020-11-23:

There is now an official Gemini companion spec that we all should use:

Subscribing to Gemini pages

I will be updating my gmi-feed-aggregator to support this spec.

My original post follows:

⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯

I looked through the discussions on the Gemini mailing list and read the posts by ew0k and Drew DeVault.

I wholeheartedly disagree with the opinion that Atom/RSS (or JSON feeds) should be enough for everybody.

The point is this: some of us are not thinking about running feed generators, parsers, and aggregators on our laptops, workstations, or the modern servers we own.

We are thinking about running these programs on computers where we have limited permissions: OpenWRT routers, experimental SBCs, old netbooks, and rooted phones that cannot be updated to any recent distros because of missing driver blobs.

In these situations even Python (widespread as it is) may not be available, may be too resource-hungry, or may be impossible to update or extend with libraries.

What we need is the ability to process feeds with a bare minimum of tools (e.g. a POSIX shell, BusyBox, etc.). Parsing XML and JSON is not feasible in these situations.

Therefore we want a plain Gemini feed format. Seeing how easy it is to generate and parse Gemini files with a plain shell script makes us want it badly. We also hope it would have more uses than gemlogging alone.

What should we do about it

I think we should start by building the tools for ourselves and sharing them (probably on the Gemini list). After we have a few implementations, the developers can discuss a formal spec.

The main criteria should probably be the amount of effort and knowledge needed to implement a parser.

What I found in the discussions is that this may be the lowest common denominator at the moment:

=> URL ISO-TIMESTAMP TITLE-MAYBE-WITH-AUTHOR

We can start from this and agree that our parsers will rely on just these lines and ignore the rest for the moment. Matching them could be done with a command like this:

grep -E '^=>[[:space:]]*gemini://[^ ]+ [0-9]{4}-[0-9]{2}-[0-9]{2}(T[0-9]{2}:[0-9]{2}:[0-9]{2}(Z|\+[0-9]{1,2}:[0-9]{2}))?[[:space:]]+.*'


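To illustrate how little tooling this format demands, here is a sketch of a parser using only POSIX shell, grep, and read. The feed file name and its contents are sample data, not part of any spec:

```shell
#!/bin/sh
# Sketch: parse feed lines of the form "=> URL TIMESTAMP TITLE" using
# only POSIX shell, grep, and read. "feed.gmi" and its contents are
# just sample data for illustration.

cat > feed.gmi <<'EOF'
# Example feed
=> gemini://example.org/one.gmi 2020-11-19 First post
=> gemini://example.org/two.gmi 2020-11-23 Second post
EOF

# Keep only well-formed feed lines, then let read split the fields:
# word splitting puts the remainder of the line (the title) in $title.
grep -E '^=>[[:space:]]*gemini://[^ ]+ [0-9]{4}-[0-9]{2}-[0-9]{2}' feed.gmi |
while read -r arrow url timestamp title; do
    printf 'url=%s date=%s title=%s\n' "$url" "$timestamp" "$title"
done
```

No XML or JSON parser in sight: the whole job is one grep and one read loop, which is exactly what BusyBox-class environments can handle.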

My own 2 scripts

I have some Bash code inside a Makefile that generates the pages and feeds for this Gemlog:

The Makefile

The resulting feed
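Generating the feed is just as cheap as parsing it. This is not the actual Makefile above, only an illustrative sketch: it assumes one file per post named YYYY-MM-DD-slug.gmi with the title as a "# " heading on the first line, and BASE_URL and the sample post are hypothetical:

```shell
#!/bin/sh
# Sketch: generate "=> URL TIMESTAMP TITLE" feed lines from post files.
# Assumes one file per post, named YYYY-MM-DD-slug.gmi, with the post
# title as a "# " heading on the first line. BASE_URL and the sample
# post are hypothetical.

BASE_URL="gemini://example.org/posts"

printf '# Hello Gemini\n\nBody text.\n' > 2020-11-19-hello.gmi

for f in 20??-??-??-*.gmi; do
    date=$(printf '%s' "$f" | cut -c1-10)    # date from the filename
    title=$(sed -n '1s/^# //p' "$f")         # title from the first line
    printf '=> %s/%s %s %s\n' "$BASE_URL" "$f" "$date" "$title"
done
```

Everything here (cut, sed, printf, shell globs) is available in BusyBox, so the same generator runs on a router as easily as on a workstation.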

I also wrote an aggregator that could parse such feeds and create a page from them:

https://tildegit.org/emilis/gmi-feed-aggregator

At the moment of writing this post, both of these scripts use the format proposed by ~ew0k with commas "," separating TIMESTAMP and AUTHOR:

~ew0k - Is This Aggregator Idea Good?

I will be dropping the commas promptly, because at least one of them is unnecessary: neither URL nor TIMESTAMP contains spaces, so plain spaces are enough as separators around them.

I am looking forward to your implementations. Please share them!

⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯

If you want to reply to this post, email me your reply or a gemini:// link to emilis [at] emilis.net.

Back to ~emilis Home

Atom feed

⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯

SPDX-License-Identifier: CC-BY-NC-SA-4.0