💾 Archived View for rawtext.club › ~sloum › geminilist › 002381.gmi captured on 2020-09-24 at 03:09:34. Gemini links have been rewritten to link to archived content
-=-=-=-=-=-=-
Ecmel Berk Canlıer me at ecmelberk.com
Mon Aug 10 23:14:04 BST 2020
- - - - - - - - - - - - - - - - - - -
For people like me who often read something in Gemini/Gopherspace,
then want to reference a few days later but cannot remember where
they read it, a proxy which maintained full-text search of everything
visited in the past month or so would be *super* handy, but I have no
idea how to build such a thing.
I assume downloading every page into a local cache and running something like https://blevesearch.com/ on top of it when a search request comes in would work reasonably well.
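A toy sketch of that idea, in Python rather than Go, with a hand-rolled inverted index standing in for a real engine like bleve (the URLs and page text here are made up for illustration):

```python
import re
from collections import defaultdict

def tokenize(text):
    # Lowercase word tokens; a real engine would also stem and drop stop words.
    return re.findall(r"[a-z0-9]+", text.lower())

class Index:
    def __init__(self):
        # term -> set of cached page URLs containing that term
        self.postings = defaultdict(set)

    def add(self, url, text):
        for term in tokenize(text):
            self.postings[term].add(url)

    def search(self, query):
        # AND semantics: return pages containing every query term.
        sets = [self.postings.get(t, set()) for t in tokenize(query)]
        return set.intersection(*sets) if sets else set()

idx = Index()
idx.add("gemini://example.org/notes.gmi",
        "Full-text search over cached Gemini pages")
idx.add("gemini://example.org/log.gmi",
        "A daily log of small experiments")
print(idx.search("cached search"))  # -> {'gemini://example.org/notes.gmi'}
```

The proxy would call add() on every page it fetches, and serve search() results as a generated gemtext page.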
I assume this wouldn't require _that much_ space, as most Gemini pages (that I have encountered so far) are small, and text is really compressible if the need comes up.
Changes to the cached pages could be stored with deltas/patches, so that might be interesting for historical preservation too, assuming most pages don't change that much, that often.
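For instance, the standard library's difflib can already store and replay a line-level delta between two captures of a page (note that ndiff keeps context lines, so a space-conscious implementation would use unified diffs or binary deltas instead):

```python
import difflib

# Two hypothetical captures of the same page, a few days apart.
old = "# My page\nHello Gemini\n".splitlines(keepends=True)
new = "# My page\nHello Gemini!\nA new line\n".splitlines(keepends=True)

# Store only the delta between successive captures.
delta = list(difflib.ndiff(old, new))

# Later, rebuild the newer capture from the stored delta alone.
restored = difflib.restore(delta, 2)  # 2 selects the "new" side
assert "".join(restored) == "".join(new)
```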
And all this can happen without leaving Gemini, like how GUS works.
I'm kind of attracted to the idea of small, simple, do-one-thing-well
proxies which can be chained together like "filter" programs in a
pipeline...but I guess the TLS overhead would stack up quickly.
Since I am not exactly knowledgeable on TLS and other low-level protocols (except knowing how to open a socket), how much would the TLS overhead be if the proxy was hosted on the same machine as the client? I assume modern CPUs can easily deal with the encryption work at reasonable speeds, and the connection being local would probably get rid of the majority of the network overhead.
Also another idea to throw into the pile: a "master" proxy that accepts TLS connections, but delegates everything else to small filter scripts. That way we can get rid of the TLS and networking overhead of stacking multiple proxies on top of each other, while keeping the flexibility and simplicity of the pipeline approach. The one drawback I can think of is that the scripts would no longer be regular proxies on their own, and if someone wanted to use even just one of these scripts, they would need to install the entire pipeline framework.
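The delegation part of that master proxy could be as simple as piping the fetched page through each filter's stdin/stdout in turn. A sketch, assuming each filter is any executable that reads stdin and writes stdout (tr and grep stand in here for real gemtext filters):

```python
import subprocess

def run_pipeline(body, filters):
    # Feed the page body through each filter's stdin/stdout in sequence.
    for cmd in filters:
        proc = subprocess.run(cmd, input=body, capture_output=True, check=True)
        body = proc.stdout
    return body

# Example: uppercase everything, then strip blank lines.
page = b"# Hello\n\nsome text\n"
out = run_pipeline(page, [["tr", "a-z", "A-Z"], ["grep", "-v", "^$"]])
print(out.decode())  # -> "# HELLO\nSOME TEXT\n"
```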
But at that point, why not build the pipelines into clients? Assuming these scripts work via "stdin -> (script magic) -> stdout", even shell clients could run them by piping their "gemini receive" command into them. Long-running scripts could probably just be accessed via wrappers with netcat, or something else I haven't thought of yet.
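A filter script in that style could be just a few lines. A sketch of one such filter, which rewrites http:// link lines to a made-up gemini portal host (the portal address is purely illustrative):

```python
def rewrite_links(lines):
    # Rewrite gemtext link lines pointing at http:// to go through a
    # hypothetical HTTP-to-Gemini portal; pass everything else through.
    for line in lines:
        if line.startswith("=>"):
            yield line.replace("http://", "gemini://portal.example/http/", 1)
        else:
            yield line

# Wired into a pipeline, the script body would be:
#   sys.stdout.writelines(rewrite_links(sys.stdin))
demo = ["=> http://example.com A web link\n", "Plain text line\n"]
print("".join(rewrite_links(demo)))
```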
Anyway, these are all just some ideas that I am throwing out here. I might even try building some of them if there's any interest, but I am pretty sure there will be some issues I haven't thought of. Let me know what you all think!
--
Have a nice (day|night|week(end)?)
~ Ecmel B. Canlıer ~