## What's this search engine called?

Let's call it Marginalia Search, as that's what most people seem to do.

There is some confusion, perhaps a self-inflicted problem, as I'm not really into branding and logos; to make matters worse, I've used a number of different internal names, including Astrolabe and Edge Crawler. But most people seem to favor "Marginalia Search". Let's not worry too much about what the "real" name is and use whatever gets the idea across.
## Can I get a copy of the data?

Send me an email and we can talk about it. I'm more than happy to share, but for logistical reasons I can't just put everything on an FTP server for ad-hoc access. The Works is hundreds of gigabytes of data, and much of it is in nonstandard binary formats I've designed myself to save space.
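For a flavor of what a space-saving binary format can involve (purely an illustration; the actual on-disk layouts aren't public), variable-length integer coding is a typical trick: small numbers take one byte instead of eight.

```java
import java.io.ByteArrayOutputStream;

// Illustrative only: a LEB128-style varint, a common trick for shrinking
// integer-heavy data files. Not the actual format used by the search engine.
class Varint {
    static void write(ByteArrayOutputStream out, long value) {
        // Emit 7 bits at a time, high bit set on all but the last byte
        while ((value & ~0x7FL) != 0) {
            out.write((int) ((value & 0x7F) | 0x80));
            value >>>= 7;
        }
        out.write((int) value);
    }

    static long read(byte[] data, int offset) {
        long result = 0;
        for (int shift = 0; ; shift += 7) {
            byte b = data[offset++];
            result |= (long) (b & 0x7F) << shift;
            if ((b & 0x80) == 0) return result;
        }
    }

    public static void main(String[] args) {
        var out = new ByteArrayOutputStream();
        write(out, 300);                                 // 2 bytes instead of 8
        System.out.println(read(out.toByteArray(), 0));  // prints 300
    }
}
```

Applied across millions of document IDs and word positions, tricks like this are what keep "hundreds of gigabytes" from turning into terabytes.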
## Why is it English only?

I'm currently focusing on English web content, in part because I need to limit the scope of the search engine: I have limited hardware and limited development time.

I'm just one person, and I speak Swedish fluently, English passably, and understand enough Latin to tell my quids from my quods, but the breadth of my linguistic capability ends there.

As such, I couldn't possibly ensure good quality search results in hundreds of languages I don't understand. Half-assed internationalization is, in my personal opinion, a far bigger insult than no internationalization.
## What is it running on?

The software is custom built in Java. I use MariaDB for some ancillary metadata.

The hardware is a single consumer-grade computer, a Ryzen 3900X with 128 GB of RAM (without ECC). I snatched one of the few remaining Optane 900Ps, and it's backing the database.
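As a minimal sketch of the "MariaDB for ancillary metadata" part of the stack: the table, columns, and credentials below are invented for illustration; only the Java-plus-MariaDB pairing comes from the answer above.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Hypothetical example (requires the MariaDB Connector/J driver on the
// classpath): table and column names are made up, not the real schema.
class MetadataExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mariadb://localhost:3306/search", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO domain_metadata (domain, last_crawl) VALUES (?, NOW())")) {
            ps.setString(1, "example.com");
            ps.executeUpdate();
        }
    }
}
```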
## How big is the index?

It depends on when you ask, but the record is 50,000,000 documents, with room to spare for probably 50-100% more. In terms of disk size, we're talking hundreds of gigabytes.

Index size isn't a particularly good metric, though. It's good for marketing, but in practice an index with a million documents that are all of high quality is better than an index with a billion documents where only a fraction of them are interesting. Separating the wheat from the chaff is a much harder problem than just building a huge pile of both.
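As a rough sanity check of these numbers (treating "hundreds of gigabytes" as roughly 300 GiB, which is an assumed order of magnitude, not a stated figure):

```java
// Back-of-the-envelope only: "hundreds of gigabytes" is read here as ~300 GiB,
// an assumption for illustration rather than a figure from the FAQ.
class IndexMath {
    public static void main(String[] args) {
        long documents = 50_000_000L;
        long indexBytes = 300L * 1024 * 1024 * 1024;   // assumed ~300 GiB

        // Average storage budget per indexed document
        System.out.println(indexBytes / documents + " bytes/doc"); // ~6442

        // "Room to spare for 50-100% more" implies a ceiling of 75-100M docs
        System.out.println(documents * 3 / 2 + " to " + documents * 2);
    }
}
```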
## Where does the data come from?

I do the crawling myself. It seems to peak at around 100 documents per second.
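Aggregate throughput like that typically comes from having many domains in flight at once while each individual domain is fetched slowly and politely. A minimal sketch of that pattern (the per-domain delay and thread count are illustrative, not the crawler's actual settings):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch, not the actual crawler: throughput comes from many
// domains crawled concurrently, each fetched sequentially with a polite delay.
class PoliteCrawler {
    static final HttpClient client = HttpClient.newHttpClient();

    static void crawlDomain(List<URI> urls) throws Exception {
        for (URI url : urls) {
            HttpRequest request = HttpRequest.newBuilder(url).GET().build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(url + " -> " + response.statusCode());
            Thread.sleep(1000); // one request per second per domain (illustrative)
        }
    }

    public static void main(String[] args) {
        // e.g. 100 domains at 1 doc/s each ≈ 100 docs/s aggregate
        ExecutorService pool = Executors.newFixedThreadPool(100);
        pool.submit(() -> {
            try {
                crawlDomain(List.of(URI.create("https://example.com/")));
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        pool.shutdown();
    }
}
```

At that rate, 50,000,000 documents works out to roughly six days of continuous crawling.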
## Is it better than Google?

No, and it's not trying to be. It's trying to complement Google by being good at what Google is bad at. What the world needs is additional search options, not a new top dog.
## Is the source code available?

It is not open source, and it probably will not be in its entirety, at least not in the foreseeable future. This isn't because what I'm doing is somehow secret and I don't want competition, but because I don't have the time to step up to the responsibility of maintaining an open source project of this scale, and just dumping gargantuan mountains of unmaintained code arguably harms open source more than it helps it.

I have limited time to put into maintaining this project as it is; the added work of maintaining an open source project in a responsible fashion is simply more than I can take on.

Most of the code is also extremely specialized to this particular domain. To run on very limited hardware, it does one thing and one thing only, makes a lot of domain-specific assumptions, and incorporates a fair amount of demoscene-esque bit twiddling (a flavor of which is sketched below). The potential for adapting that kind of component to other projects is very limited.

I do, however, plan on releasing some of the more useful pieces and specialized data structures.
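For a taste of what that bit twiddling can look like (a generic illustration, not the project's actual layout): packing several small metadata fields into a single 64-bit word keeps per-document bookkeeping compact and cache-friendly.

```java
// Generic illustration of field packing, not code from the project:
// squeeze a rank, a word count, and a flag byte into one long.
class PackedDocMeta {
    // Assumed bit layout: [rank: 16][wordCount: 24][flags: 8][unused: 16]
    static long pack(int rank, int wordCount, int flags) {
        return ((long) (rank & 0xFFFF) << 48)
             | ((long) (wordCount & 0xFFFFFF) << 24)
             | ((long) (flags & 0xFF) << 16);
    }

    static int rank(long packed)      { return (int) (packed >>> 48) & 0xFFFF; }
    static int wordCount(long packed) { return (int) (packed >>> 24) & 0xFFFFFF; }
    static int flags(long packed)     { return (int) (packed >>> 16) & 0xFF; }

    public static void main(String[] args) {
        long meta = pack(42, 1300, 0b1);
        System.out.println(rank(meta));      // 42
        System.out.println(wordCount(meta)); // 1300
        System.out.println(flags(meta));     // 1
    }
}
```

The catch, as noted above, is that a layout like this bakes in assumptions about field widths and value ranges that rarely transfer to other projects.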
## How do I get my website removed from the index?

Send me an email and I'll see if I can't block the domain.
## How do I contact you?

Reach me at kontakt@marginalia.nu