Well, I already posted today, so I'll post this one for 
tomorrow. It's tomorrow already in most of the world 
anyways.

I finished the automated website conversion script. 

On my server, I put the script in /opt/<website_name>.

Then I created a folder at /var/gopher/<website_name>. The 
script dumps each linked page to a text file in that 
folder and then creates a gophermap pointing to those 
files.
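
A rough sketch of the setup, assuming the script is saved 
as /opt/LXer/lxer.sh (the filename is just an example):

mkdir -p /opt/LXer /var/gopher/LXer
chmod +x /opt/LXer/lxer.sh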

You can check out the results at: 
gopher://gopher.visiblink.ca

Here's the script:


#!/bin/bash

# Clear the directory before starting (so that you can run the script as a cron job and update it at periodic intervals)

rm /var/gopher/LXer/*

# Get a list of links from the LXer page and save the list to a working file

lynx --dump http://lxer.com/module/newswire/mobile.php | awk '/http/{print $2}' | grep http > /var/gopher/LXer/working_file.txt

# Dump the link pages to text files with usable filenames (that is, without the slashes):

for i in $( cat /var/gopher/LXer/working_file.txt ); do lynx --dump -nonumbers -nolist -hiddenlinks=ignore -width 60 $i > /var/gopher/LXer/"${i////_}"; done

# Dump the usable filenames to a text file

for i in $( cat /var/gopher/LXer/working_file.txt ); do echo "${i////_}" >> /var/gopher/LXer/usable_filenames.txt; done

# Generate a file with actual web page titles (preceded by a zero for the gophermap)

for i in $( cat /var/gopher/LXer/working_file.txt ); do wget -qO- ${i} | perl -l -0777 -ne 'print 0,$1 if /<title.*?>\s*(.*?)\s*<\/title/si' >> /var/gopher/LXer/titles.txt; done

# Create a gophermap by merging the two files ("paste" joins the corresponding lines of each file with a tab between them, which is super convenient!)

paste /var/gopher/LXer/titles.txt /var/gopher/LXer/usable_filenames.txt > /var/gopher/LXer/gophermap
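
In case the ${i////_} expansion looks cryptic: it replaces 
every slash in the URL with an underscore, so each page can 
be saved under a single flat filename. For example:

i=http://lxer.com/module/newswire/
echo "${i////_}"    # prints http:__lxer.com_module_newswire_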
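
Each line of the resulting gophermap then looks something 
like this (made-up title and filename, with a literal tab 
between them):

0Some Linux Article Title	http:__lxer.com_module_newswire_view_123456_index.html

The leading 0 is the gopher item type for a plain text 
file, and the part after the tab is the selector, which 
matches one of the dumped filenames.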
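
And since the rm at the top is there so the whole thing can 
run unattended, a crontab entry along these lines would 
refresh the page every hour (the script path is just an 
example):

0 * * * * /opt/LXer/lxer.sh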