I used all the content from previous crawls by my search engine Kennedy to build a searchable archive! It's just like the Internet Archive's Wayback Machine, but for Gemini: 2 million+ URLs/versions, going back to Jan 2022.
Example: gemini://kennedy.gemi.dev/archive/cached?url=gemini%3a%2f%2fdrewdevault.com%2f&t=637898933801779424&raw=False
You can enter an exact URL, or just search for part of a URL, such as a domain name.
gemini://kennedy.gemi.dev/
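If you want to pull a cached copy programmatically, here's a minimal sketch of a raw Gemini request in Python. The query parameters (url, t, raw) come straight from the example link above; the function name is mine, and disabling certificate verification is a simplification, since most capsules use self-signed certs and real clients do TOFU pinning instead.

```python
import socket
import ssl

def gemini_fetch(url: str, host: str, port: int = 1965) -> str:
    """Send one Gemini request (the URL plus CRLF) and return the response body."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # simplification: skip verification, no TOFU
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall((url + "\r\n").encode("utf-8"))
            data = b""
            while chunk := tls.recv(4096):  # server closes after the response
                data += chunk
    # Response = "<status> <meta>\r\n" header, then the body
    header, _, body = data.partition(b"\r\n")
    print(header.decode("utf-8"))  # expect "20 text/gemini" on success
    return body.decode("utf-8", errors="replace")

page = gemini_fetch(
    "gemini://kennedy.gemi.dev/archive/cached"
    "?url=gemini%3a%2f%2fdrewdevault.com%2f&t=637898933801779424&raw=False",
    host="kennedy.gemi.dev",
)
```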
1 month ago · 👍 cobradile94, skyjake, jo, astroseneca, warpengineer, mozz, moddedbear, devyl
@acidus love it! searching without being afraid of everyone easily finding all my past mishaps! keep up the good work and thanks for running your services for all of us to use · 4 weeks ago
@danrl sure is. The robots.txt companion spec for Gemini talks specifically about this.
gemini://gemini.circumlunar.space/docs/companion/robots.gmi
Basically, set "archiver" exclusion rules in your robots.txt. This lets your capsule stay crawlable and searchable by search engines without being archived. Or just deny all crawlers. Whatever you feel most comfortable with.
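For example, a minimal robots.txt along these lines (a sketch based on the companion spec's virtual user-agents, served at the root of your capsule):

```
# Allow search engines, but opt out of archiving.
# "archiver" is a virtual user-agent defined in the companion spec.
User-agent: archiver
Disallow: /

# Or shut out every crawler instead:
# User-agent: *
# Disallow: /
```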
I'll also add some clearer language, and a way to contact me to opt out as well · 1 month ago
@skyjake fixed. Thanks · 1 month ago
Is there a way to opt capsules out? Like with the Internet Archive, which used to follow robots.txt (but then stopped, which I find quite audacious, but that's a different story) · 1 month ago
"This is the archive _verision_ of"
A small typo... · 1 month ago
That's awesome! Gemini needs to be preserved the same way the Clearnet is! I really appreciate the work you're doing here! 😁 · 1 month ago