```
# touch /etc/testhosts
# grep testhosts /etc/pf.conf
table <test> persist file "/etc/testhosts"
# echo 192.0.2.42 | pfctl -t test -T add -f -
1/1 addresses added.
# pfctl -t test -T show
   192.0.2.42
# pfctl -f /etc/pf.conf
# pfctl -t test -T show
#
```
The table update method in
=> /blog/2024/06/18/firewall-autoban.gmi
is thus problematic, as reloads of the rules or reboots of the system will wipe out dynamic addresses in the table, that is, hosts not also placed into the "/etc/testhosts" file. There could be an "@reboot" cron job to load the dynamic portions on boot, as well as an "edit and reload the firewall" wrapper script that carries out any necessary pre- and post-change actions for the firewall rules, but another way would be to only put addresses into the file, so that when the file changes (or on firewall reload or system reboot) the new addresses will be loaded into memory. Dynamic addresses are still good if you want to expire the entries after some amount of time and do not care that a firewall reload or system reboot wipes them out. However, if frequent firewall edits are made, temporary addresses may become a bit too dynamic, as there might be a lot of churn putting them back into memory after each edit. This may be more manageable if you have planned outage windows for firewall rule changes, as opposed to someone who is always fiddling around with the rules to try out this or that.
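A minimal sketch of the "@reboot" idea, assuming the dynamic addresses get dumped to a hypothetical /var/db/test-dynamic file by some other job before the reboot; the crontab(1) entry simply loads them back into the "test" table at boot.

```
# crontab(1) fragment; /var/db/test-dynamic is hypothetical and must
# be written out by something else for this to be of any use
@reboot pfctl -t test -T add -f /var/db/test-dynamic
```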
If you do have outage windows, do not make them overlap with the window for when the network group changes the core routers, as that risks your SSH connections failing because the packets are now being routed wrongly. True story! Also changing two things at the same time makes debugging—A? B? both A and B? neither A nor B?—that much more difficult, especially when the various teams are prone to blame some other group by default. Tribal boundaries, much?
So the requirements are to get the addresses for the firewall into a file, and should that file change, update the in-memory table. It might also be nice to have a delta of what has changed; one easy way to do this is to store the file under version control. Probably we will need to sort and unique the addresses, unless the source of the addresses confirms (in writing, via a contract, if you're into that sort of thing) that the addresses will already be in such a state. Without sorting, version control could go nuts saving new orderings of the same addresses. On the other hand, sorting already sorted host lists will waste CPU and memory. Maybe only sort if the input proves not to be sorted in advance, and do it once at the edge of your organization?
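A sketch of the "only sort when necessary" idea, assuming the addresses live in a file here called hosts; sort -C -u exits non-zero when the input is not already sorted and unique, so the full sort only runs when it has to.

```
# the "hosts" filename is hypothetical; only pay for a sort when
# the input is not already sorted and unique
if ! sort -C -u hosts; then
    sort -u -o hosts hosts
fi
```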
You could also stick the addresses into a database, but we'll be using files here, for better or worse.
You may want something modern, but git is huge, and may not be desirable to install on some hosts. rcs(1) ships with OpenBSD and probably has fewer security vulnerabilities than git, but also has a lot less tooling and fewer people who know it. Whether folks who use git actually know how to use it is a different question—git is a website, right? There are degrees of competence and comprehension here, and users may have competence without any comprehension of why or how their git commands work.
RCS being simpler also means there is a lot less to learn.
```
$ rm foo*
rm: foo*: No such file or directory
$ touch foo
$ ci -m'initial commit' -t'-test file' foo
foo,v  <--  foo
initial revision: 1.1
done
$ ls foo*
foo,v
```
That's the rough equivalent of a "git init". Adding a new host to this file works as follows.
```
$ co -l -q foo
$ echo 192.0.2.42 > foo
$ rcsdiff -u foo
===================================================================
RCS file: foo,v
retrieving revision 1.1
diff -u -r1.1 foo
--- foo	2025/06/30 02:20:18	1.1
+++ foo	2025/06/30 02:20:42
@@ -0,0 +1 @@
+192.0.2.42
$ echo $?
1
$ ci -m'block the rat bastards at 192.0.2.42' -q foo
```
Note the exit status of rcsdiff(1); if no changes have been made to the file, the status is "0":
```
$ co -l -q foo
$ rcsdiff -q foo
$ echo $?
0
```
This feature allows us to branch to "update the firewall" when there has been a change.
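A sketch of that branch, assuming the "test" table from above and that the foo file holds the addresses; the actual update of foo (by clobber or by merge, both discussed next) is left as a placeholder.

```
co -l -q foo
# ... update foo here, by clobber or by merge ...
if rcsdiff -q foo; then
    # nothing changed; drop the lock and leave the table alone
    rcs -u -q foo
else
    ci -u -m'update blocked hosts' -q foo
    pfctl -t test -T replace -f foo
fi
```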
There are two options for updating the host list: clobber or merge. Clobber is for when the input is authoritative, and merge suits adding one-off hosts. A merge might look something like the following, accepting one or more hosts on standard input and merging them with the existing file. The sort(1) will, however, mess up any comments that might be in the file. A hash could remove duplicates while preserving line order, but is more expensive; a sketch of that follows the merge example.
```
co -l -q foo
TMPFILE=`mktemp -q -t foo.XXXXXXXXXX`
cat foo - | sort -u > "$TMPFILE"
mv "$TMPFILE" foo
rcsdiff foo
...
```
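The order-preserving alternative mentioned above might swap the sort(1) for an awk(1) one-liner, at the cost of keeping a hash of every line seen in memory; a sketch, reusing the $TMPFILE from the merge above:

```
# drop duplicate lines while keeping the first occurrence of each;
# comments and line order are preserved, more memory is used
cat foo - | awk '!seen[$0]++' > "$TMPFILE"
```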
A clobber simply replaces the entire contents of the file; the source is authoritative. The moreutils package has a sponge utility that better hides the mktemp(1) and mv(1) dance, at the cost of installing more stuff. If you go with a merge then you may also need tools to list and remove hosts, or to edit the file directly. Wrapper programs, while more complicated than editing the file directly, can offer sanity checks such as whether a host is already in a blacklist (or a good hosts list!) and whether the addresses are valid, plus better logging of changes, minimization of the time the file is locked during checkout, etc. Since the blacklist of the previous blog post is authoritative, we'll use the clobber method.
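A sketch of the sponge(1) variant of the merge, assuming moreutils is installed; sponge soaks up all of standard input before writing, so foo is not clobbered while it is still being read.

```
# the merge from before, with sponge(1) standing in for the
# mktemp(1) and mv(1) dance
cat foo - | sort -u | sponge foo
```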
The scripts would be annoying to pull out of a random blog post, so see the following repository for code that installs the Spamhaus DROP lists into a Packet Filter table.
=> https://thrig.me/src/openbsd-spamhaus-drop.git
In theory this all should work, and I am dogfooding this code on both my hosts, but neither has gone through an update yet. (An update would be easy to test if you modify the source URL to point at a system you control and provide a file with different addresses in it, but I'm not feeling that proactive, and letting things break in production has sort of a nostalgic feel to it.)