Again on feeds in Gemini format
- Messages: 37
- Authors: 22
- First Message: 2020-11-19 01:12
- Last Message: 2020-11-20 18:08
1. Emilis (emilis (a) emilis.net)
- Sent: 2020-11-19 01:12
- Message 1 of 37
Hi,
I am developing a generator and a parser for the
frequently-discussed-but-never-agreed-on feeds in Gemini format.
I would like to share my code, see your similar code and later discuss
(over IRC?) what format could work best for our use cases.
My code:
Gemlog generator: gemini://tilde.team/~emilis/Makefile
Aggregator: https://tildegit.org/emilis/gmi-feed-aggregator
The motivation (taken from this post:
gemini://tilde.team/~emilis/2020/11/19-on-feeds-in-gemini-format.gmi ):
I looked through the discussions on the Gemini list and read the posts
by ~ew0k and Drew DeVault.
I wholeheartedly disagree with the opinion that Atom/RSS (or JSON feeds)
should be enough for everybody.
The point is: some of us are not thinking about running feed
generators, parsers and aggregators on developer laptops, workstations,
modern servers we own, etc.
We are thinking about running these programs on computers where we have
limited permissions: OpenWRT routers, experimental SBCs, old netbooks
and rooted phones that cannot be updated to any recent distros, etc.
In these situations even Python (widespread as it is) may not be
available, may be too resource-hungry or may not have the option to be
updated or extended with libraries.
What we need is the ability to process feeds with a bare minimum of
tools (e.g. a POSIX shell, BusyBox, etc.). Parsing XML and JSON is not
feasible in these situations.
Therefore we want a plain Gemini feed format. Seeing how easy it is to
generate and parse Gemini files with just a plain shell script makes us
want it badly. We also hope it would have more uses than just
gemlogging.
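To show what I mean by "easy to generate with just plain shell", here is a minimal sketch of a feed-line generator in POSIX sh. The base URL, the YYYY-MM-DD-slug.gmi file-naming convention, and the rule that the first "# " heading is the title are all my own illustrative assumptions, not part of any spec:

```shell
#!/bin/sh
# Sketch: turn one gemtext post into a single feed line.
# Assumed conventions (not a spec): files are named YYYY-MM-DD-slug.gmi,
# and the first level-1 heading ("# ...") is the post title.
BASE_URL="gemini://example.org/posts"   # hypothetical base URL

feed_line() {
    name=$(basename "$1" .gmi)                    # e.g. 2020-11-19-hello
    date=$(printf '%s' "$name" | cut -c1-10)      # YYYY-MM-DD prefix
    title=$(sed -n 's/^# //p' "$1" | head -n 1)   # first "# " heading
    printf '=> %s/%s.gmi %s %s\n' "$BASE_URL" "$name" "$date" "$title"
}
```

Run `feed_line` over each post file and concatenate the output to get a feed. Everything here is in BusyBox.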
## What should we do about it
I think we should start by just building the tools for ourselves and
sharing them (probably on the Gemini list). After we have a few
implementations, we can discuss a formal spec among the developers.
The main criteria should probably be the amount of effort and knowledge
needed to implement a parser.
What I found in the discussions is that this may be the lowest common
denominator at the moment:
```
=> URL ISO-TIMESTAMP TITLE-MAYBE-WITH-AUTHOR
```
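For illustration, entries in that shape might look like this (the URLs, dates and titles here are made up):

```
=> gemini://example.org/posts/first.gmi 2020-11-19 First post - Alice
=> gemini://example.org/posts/second.gmi 2020-11-20T18:08:00Z Second post
```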
We can start from this and agree that, for the moment, our parsers will
rely on just these lines and ignore the rest. It could be done with a
command like this:
```
grep -E '^=>\s*gemini://[^ ]+\s+[0-9]{4}-[0-9]{2}-[0-9]{2}(T[0-9]{2}:[0-9]{2}:[0-9]{2}(Z|\+[0-9]{1,2}:[0-9]{2}))?\s+.*'
```
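Once the matching lines are extracted, splitting them into fields also needs nothing beyond POSIX shell. A minimal sketch (the function name and the tab-separated output are my own choices; it assumes the URL and date carry no spaces, so the rest of the line is the title):

```shell
#!/bin/sh
# Sketch: split feed lines into date, URL and title using plain "read".
parse_feed() {
    while read -r arrow url date title; do
        # Only process "=>" link lines whose third field starts YYYY-MM-DD.
        [ "$arrow" = "=>" ] || continue
        case $date in
            [0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]*) ;;
            *) continue ;;
        esac
        printf '%s\t%s\t%s\n' "$date" "$url" "$title"
    done
}
```

Pipe a feed into it, e.g. `parse_feed < feed.gmi | sort -r`, to get entries newest-first.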