https://www.reddit.com/r/geminiprotocol/comments/1fzvyu0/distributed_p2p_dht_gemini_protocol/
created by YoursTrulyKindly on 09/10/2024 at 16:45 UTC
5 upvotes, 1 top-level comment (showing 1)
I just discovered Gemini and it looks brilliant! Currently trying out Lagrange.
Is there any thought about something like federalist[1]? Like an alternative Gemini protocol that enables decentralized, uncensorable, unblockable, and distributed high-performance web pages? Federalist uses Handshake for decentralized DNS and mutable torrents (BEP 46).
1: https://github.com/publiusfederalist/federalist
A Gemini client could run a torrent node and, instead of keeping a cache and bookmarks, store and serve all visited pages via the torrent protocol (or IPFS). Ideally the pages would be compressed in an archive similar to precomp[2], with de-duplication and transcoding of JPG to JXL. Every Gemini page URL could then be mapped to something like a magnet link (a hash of URL+date?) that can be retrieved via the DHT if the original host is down or responds too slowly; a sketch of that mapping follows below. Clients could also search for and cache rare pages, automatically creating a resilient distributed archive.
2: https://github.com/schnaader/precomp-cpp
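Roughly what I mean by the mapping, as a toy Python sketch. SHA-1 is used only because mainline DHT keys are 20 bytes; the `"url|date"` layout and the function name are made up, not part of any spec:

```python
import hashlib

def dht_key(url: str, snapshot_date: str) -> bytes:
    """Derive a deterministic 20-byte DHT key from a page URL and snapshot date.

    Hashing the date in means each day's snapshot gets its own key,
    so stale copies don't shadow newer ones.
    """
    return hashlib.sha1(f"{url}|{snapshot_date}".encode("utf-8")).digest()

# Any client hashing the same URL and date lands on the same key,
# so peers can find each other's cached copies without coordination.
key = dht_key("gemini://example.org/blogPost42.gmi", "2024-10-09")
print(key.hex())
```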
Or you could host your own web pages via the DHT alone, since hosting your own domain and pages is costly, complicated, dependent on hosting providers, and not anonymous for the average user. Just click "create server", give it a name that automatically gets a unique number appended, and share a link, e.g. `gemini://CookingForNerds738/blogPost42.gmi` (rough sketch below). I'm not sure how decentralized DNS should work here; Handshake seems a bit too heavy and focused on creating a hierarchy instead of just typeable unique site names.
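Maybe the "create server" step could look like this. BEP 46 identifies a mutable torrent by an ed25519 public key; everything else here (the readable name, the suffix scheme) is invented for the sketch, and it uses the `cryptography` package:

```python
import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The signing key stays on the author's machine; the public key is what
# the DHT would actually resolve to the latest version of the site.
signing_key = Ed25519PrivateKey.generate()
pub = signing_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)

# Derive the "automatically appended unique number" from the key itself,
# so no registry is needed and two sites picking the same name rarely collide.
suffix = int.from_bytes(hashlib.sha256(pub).digest()[:3], "big") % 1000
print(f"gemini://CookingForNerds{suffix}/blogPost42.gmi")
print(f"public key behind the name: {pub.hex()}")
```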
--------------------------------------------------------------------------------
Ultimately what I'd like is a lightweight browser that can clean up news articles and archive web pages, sites, and information permanently. Currently I use bookmarks, Firefox reader view, .md files with links, Zotero for PDFs, and SingleFile to save web pages. It's cluttered, and there doesn't seem to be any integrated solution, or even a theory, for improving this.
NewsWaffle is great for the first part. But it could also be extended to parse any HTML link to a newspaper (e.g. a redirect from a Reddit post), cache the result, and let others download the already-parsed and archived pages from other clients via the DHT. Hmm, maybe all of this could be implemented in a proxy server (sketched below).
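As a toy sketch of that proxy flow: the DHT is stood in for by a dict, and the HTML-to-gemtext step by a crude tag-stripper (a real proxy would use NewsWaffle-style parsing and an actual DHT node; `serve_article` and its helpers are names I made up):

```python
import hashlib
import re
import urllib.request

dht: dict[str, str] = {}  # toy stand-in for a real DHT node

def html_to_gemtext(html: str) -> str:
    # Crude tag-stripper standing in for proper article extraction.
    return re.sub(r"<[^>]+>", "", html)

def serve_article(url: str) -> str:
    key = hashlib.sha1(url.encode("utf-8")).hexdigest()  # lookup key from the URL
    if key in dht:                       # another client already parsed this page
        return dht[key]
    with urllib.request.urlopen(url) as resp:  # only http(s) URLs work here
        html = resp.read().decode("utf-8", "replace")
    gemtext = html_to_gemtext(html)
    dht[key] = gemtext                   # publish the parsed copy for other peers
    return gemtext

print(serve_article("https://example.com/")[:200])
```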
Ideally I imagine a more elaborate markup though, with formatting for bold, italics, inline URLs, and tables, and ideally ebook-like formatting with paragraph indents and left, right, justified, and center alignment.
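For illustration, a hypothetical extension of gemtext (this syntax is entirely made up, not any proposed spec):

```
# Cooking for Nerds, post 42
{align: justify; indent: 1.5em}
This paragraph has **bold**, //italics//, and an inline link to [the recipe](gemini://CookingForNerds738/recipe.gmi).

| Dish      | Prep time |
| --------- | --------- |
| Dal tadka | 30 min    |
```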
Comment by fembro621 at 09/10/2024 at 19:11 UTC
2 upvotes, 1 direct reply
Sounds like a sick idea.