
Dabbling in CGI

I've always been more of a CGI guy. The output of a page is exactly what gets displayed: no pages with a single div and no content when I examine the source. Kind of an odd thing to say as a full stack developer. Within Gemini there isn't any other option for dynamic data, so I feel right at home.

I started down the path of automating the generation of my log's Atom feed, but I currently don't have a great solution for fixing the entry date and time. So for the time being the feed is a manually edited file. Each entry needs a UUID, so I created a simple generator based on the file URL.

URL UUID Generator
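
Presumably the generator boils down to Python's uuid.uuid5 over the URL namespace, the same call the feed script below uses for its entry ids. A minimal sketch, not the actual generator code (the function name here is made up):

#!/usr/bin/env python3
# Sketch of a URL-keyed UUID generator. Assumes uuid5 with the URL
# namespace; the linked generator may differ in detail.
import uuid

def url_uuid(link):
    # uuid5 is deterministic: the same URL always maps to the same UUID,
    # which is what an Atom <id> needs across regenerations.
    return uuid.uuid5(uuid.NAMESPACE_URL, link)

print("urn:uuid:{}".format(url_uuid("gemini://gemini.sh0.xyz/log/2022-09-27.gmi")))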

I finally decided to go with a CGI-based feed that reads the date/time from the file itself. I did find a bug, though I'm not sure whether it's in Molly Brown or Amfora: if the feed being served has a non-XML file extension, it isn't recognized as a feed, even when the status line is `20 text/xml`. It's not the prettiest Python code, but it works.

#!/usr/bin/env python3

import re
import os
from datetime import datetime
import uuid

# Capsule-specific settings: where the log posts live on disk and their public URL.
log_dir = "/usr/local/gemini/log"
url = "gemini://gemini.sh0.xyz/log"
author = "jecxjo"
title = "sh0.xyz captain's log"

# Start from the epoch so any real entry time becomes the feed-level <updated>.
updated = datetime.strptime("1970-01-01 00:00:00.0000", "%Y-%m-%d %H:%M:%S.%f")
# The feed id is derived deterministically from the capsule URL.
feed_urn = uuid.uuid5(uuid.NAMESPACE_DNS, url)
body = ""

# Every dated post ends in .gmi; skip the index page itself.
files = sorted(filter(lambda x:
    x.endswith(".gmi") and not x.endswith("index.gmi"),
    os.listdir(log_dir)))

for f in files:
    with open(os.path.join(log_dir, f)) as file:
        contents = file.read()
        # Title is the "# " heading at the top of the post; the "|$" alternative
        # makes findall return an empty string instead of failing when it's missing.
        entry_title = re.findall(r"^# .*|$", contents)[0][1:].strip()
        entry_link = "{}/{}".format(url, f)
        entry_urn = uuid.uuid5(uuid.NAMESPACE_URL, entry_link)
        # Raw strings keep \d from triggering invalid-escape warnings; the slices
        # drop the labels and strip() cleans up the leftover space.
        entry_published = re.findall(r"Published: \d\d\d\d-\d\d-\d\d \d\d:\d\d|$", contents)[0][10:].strip()
        entry_updated = re.findall(r"Updated: \d\d\d\d-\d\d-\d\d \d\d:\d\d|$", contents)[0][8:].strip()

        if not entry_published:
            continue

        if not entry_updated:
            entry_updated = entry_published

        entry_published = datetime.strptime(entry_published, '%Y-%m-%d %H:%M')
        entry_updated = datetime.strptime(entry_updated, '%Y-%m-%d %H:%M')

        body += "  <entry>\n"
        body += "    <title>{}</title>\n".format(entry_title)
        body += "    <link href=\"{}\"/>\n".format(entry_link)
        body += "    <id>urn:uuid:{}</id>\n".format(entry_urn)
        body += "    <published>{}</published>\n".format(entry_published.isoformat('T'))
        body += "    <updated>{}</updated>\n".format(entry_updated.isoformat('T'))
        body += "  </entry>\n\n"

        # Remember the newest entry time for the feed-level <updated>.
        if entry_updated > updated:
            updated = entry_updated

# Printing

print("20 text/xml\r")

print("<?xml version=\"1.0\" encoding=\"utf-8\"?>")
print("<feed xmlns=\"http://www.w3.org/2005/Atom\">")
print("")
print("  <title>{}</title>".format(title))
print("  <link href=\"{}\"/>".format(url))
print("  <updated>{}</updated>".format(updated.isoformat('T')))
print("  <author>")
print("    <name>{}</name>".format(author))
print("  </author>")
print("  <id>urn:uuid:{}</id>".format(feed_urn))

print("")
print(body)
print("</feed>")

$ published: 2022-09-27 11:18 $

$ updated: 2022-10-01 00:33 $

$ tags: programming, gemini $

--- CC-BY-4.0 jecxjo 2022-09-27

back