As I was working on Oddmu, I often considered the things I wanted to do differently from how I had done them for Oddmuse.
Oddmuse is first and foremost a CGI script written in Perl. Over the years, I have written a few web applications using the Mojolicious framework. These are all independent web servers. Apache, the actual web server, ends up being the reverse proxy for all of them.
I don’t really like this setup. Using CGI scripts optimizes a site for few requests. When nobody is requesting the site, no resources are being used, no memory is taken up, no files are open. The drawback is that every request requires a new process, and in the case of Perl, recompilation of the code. As bots rule the web and crawl sites in the name of backlink detection, marketing and search, it is increasingly difficult to optimize a site for few requests. There are always a lot of requests and when I complain about it, inevitably somebody shows up to tell me that actually that isn’t a lot of requests and did I consider caching and static files and on and on. We should all be concerned with using resources parsimoniously.
This is why Oddmu is a web server and not a CGI script. Using Go made it possible to write a web server that is fast, doesn't use too much memory, and was still reasonably easy for me to write.
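To illustrate the appeal: a long-running web server that serves pages from the file system takes very little Go. This is not Oddmu's actual code, just a minimal sketch; the /view/ URL scheme, the .md file extension and the port are assumptions for the example, and a real server would sanitize the page name and render the Markdown.

```go
// A minimal sketch, not Oddmu's code: one long-running process serves every
// request from files on disk, with no forking and no recompilation per hit.
package main

import (
	"log"
	"net/http"
	"os"
)

func main() {
	http.HandleFunc("/view/", func(w http.ResponseWriter, r *http.Request) {
		// Assumed URL scheme: /view/name serves the file "name.md".
		name := r.URL.Path[len("/view/"):]
		body, err := os.ReadFile(name + ".md")
		if err != nil {
			http.NotFound(w, r)
			return
		}
		// A real server would sanitize the name and render the Markdown.
		w.Header().Set("Content-Type", "text/markdown; charset=utf-8")
		w.Write(body)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```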
Using a statically compiled language made it impossible for me to include “plugins” that change the code at run-time. Oddmuse loads additional files from a subdirectory, allowing users to override all aspects of it. I often moved code that I wanted to override for my own projects into separate functions. Configurability and the rewriting of the application were goals right from the start. I was heavily influenced by Emacs, where most of the code can be rewritten by users.
The drawback is that this invites a lot of trivial hacks, extensions to the markup rules, and so on. And once I started thinking about the move *away* from Oddmuse, I realised that I had achieved my own vendor lock-in. Worse, since Oddmuse has been in use on this site for over 20 years, some very old pages hadn't seen any change in many years. Perhaps the markup rules had changed and it was only the cached HTML that was still being served correctly. Any change to such a page would regenerate the HTML cache using the new rules, possibly resulting in a page that looked broken.
So this was something to avoid. What is the solution? Either write HTML directly – or limit the markup to something that lets me simplify the easy stuff and add HTML for the rest. That solution is Markdown with as few extra rules as possible.
In fact, I've managed to add just three special rules:
Wiki links with double square brackets are only useful for single-word links such as `[[RPG]]`, at least on this wiki. That's because Oddmuse follows the UseMod convention (which is also the Wikipedia convention since at the very beginning Wikipedia used UseMod) of using the underscore for spaces in page names. Almost all the previous pages therefore use an underscore in page names to represent a space. So I was faced with these options:
I decided to use a very simple implementation only: no link text override (use standard Markdown links for that) and no spaces-to-underscore magic.
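A rule this simple fits in a few lines of Go. This is not Oddmu's implementation, just a sketch of the rule as stated, assuming the wiki links are rewritten into standard Markdown links before the page is rendered; page names that would need URL escaping are ignored here.

```go
// A sketch of the wiki link rule described above, not Oddmu's code:
// [[Page_Name]] becomes a standard Markdown link, with the page name used
// verbatim as both link text and target. No link text override, no
// spaces-to-underscore magic, no URL escaping.
package main

import (
	"fmt"
	"regexp"
)

var wikiLink = regexp.MustCompile(`\[\[([^\[\]]+)\]\]`)

func expandWikiLinks(markdown string) string {
	return wikiLink.ReplaceAllString(markdown, "[$1]($1)")
}

func main() {
	fmt.Println(expandWikiLinks("See [[RPG]] and [[Old_School_RPG]]."))
	// Prints: See [RPG](RPG) and [Old_School_RPG](Old_School_RPG).
}
```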
Let's see how far I can take Oddmu without adding markup rules.
Another lesson I took from Oddmuse being a CGI script with a hackable markup was that it would be tricky to generate static pages for it. I was caching some of the generated HTML, which was clever and sped things up, but I also had some parts of a page that depended on other pages: searches, transclusion, and "red links" – links to missing pages showing up in a different colour or with the more traditional question-mark link. This made it hard to generate static pages. Every time you created or edited a page you needed to know: is this page transcluded anywhere? Every time you created a new page you needed to know: do other wiki pages link to this formerly missing page? All those pages needed to be updated. This would require a link database, or a search that was aware of all the possibly dynamic linking markup, and if other pages changed as a result, that would invalidate their existing HTML caches and regenerate pages the user hadn't even looked at, possibly corrupting them.
In short, it was a big headache.
This is why Oddmu doesn't implement markup features that depend on the existence of other pages, nor on the content of other pages. No red links, no journal pages, no inlined search results.
And finally, I decided that I'd write the tools I really needed for my own maintenance into Oddmu as subcommands for the administrator – the person with shell access, or with some way to sync the files.
I didn't need to replicate grep, but when search and replace with dry runs (previews) started to get difficult, I wrote the "replace" subcommand. When I started caring about missing pages, I wrote the "missing" subcommand. When I wondered about generating page HTML and using it in other programs, I wrote the "html" subcommand. When I wondered about archiving, I wrote the "static" subcommand.
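The resulting shape is a single binary that either serves the wiki or runs a one-off command and exits. This is not Oddmu's actual main function, just a sketch of the pattern; the run* helpers and the argument handling are placeholders.

```go
// A sketch of the "web server with administrator subcommands" pattern,
// not Oddmu's actual code. The run* functions are placeholders.
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func runMissing()              { fmt.Println("TODO: list links to missing pages") }
func runReplace(args []string) { fmt.Println("TODO: search and replace with preview", args) }
func runHTML(args []string)    { fmt.Println("TODO: print rendered HTML", args) }
func runStatic(args []string)  { fmt.Println("TODO: write the site as static HTML", args) }

func main() {
	if len(os.Args) > 1 {
		switch os.Args[1] {
		case "missing":
			runMissing()
		case "replace":
			runReplace(os.Args[2:])
		case "html":
			runHTML(os.Args[2:])
		case "static":
			runStatic(os.Args[2:])
		default:
			fmt.Fprintln(os.Stderr, "unknown subcommand:", os.Args[1])
			os.Exit(1)
		}
		return
	}
	// No subcommand: run the wiki as a web server.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```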
And with that it seems I have both a wiki server *and* a static site generator. I think I like it. Currently, I'm using Oddmu as a wiki server.
#Oddµ
---
Phoebe lessons learned for Oddmu
---
Another benefit of staying as close to Markdown + HTML as possible is that "export is everything" (Phil Jones on ThoughtStorms, on a tweet by Chris Albon). For the export of text to be useful, however, the markup has to be widely understood. This is why there is a benefit in using Markdown + HTML rather than an eclectic mix of hacked-together markup.