Update July 2024: for better automation, maybe see instead:
=> /blog/2024/07/02/reloading-pf-tables.gmi
Firewall autobans may not be a good idea: modern IPv4 networks and ASNs get traded around a lot, and with CGNAT someone with a random cellphone could get large ranges of IP addresses automatically blacklisted if the auto-ban code is not very clever. There may also be forged packets; blacklisting based on random SYN packets may be a bad idea, especially if an attacker can figure out how to make your system deny service to itself. Memory limits may be a problem on smaller systems: is there enough memory to run all the services and also hold every blacklisted IP address, or will a slower filesystem lookup be necessary? Without bans, however, remote addresses can be nuisances or worse to public-facing services.
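On OpenBSD the pf table-entry limit is one concrete place the memory question shows up; a sketch of checking and raising it follows, where the 400000 figure is only an illustration, not a recommendation.
```
# pfctl -sm                        # show memory limits, including table-entries
# grep table-entries /etc/pf.conf
set limit table-entries 400000     # raise it if the blacklists are large
```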
One opinion here is to "block drop" by default and simply ignore most of the noise; how many stress points do you have free to worry about firewall logs? "block drop" with OpenBSD's packet filter will hang legitimate connections for a while, though I still favor drop by default, as the various "return" forms run afoul of legitimate yet buggy client systems. Hypothetically, of course, such a client might send 6,000 packets per second at the firewall, which then replies to each packet (whoops, 12,000 pps), and maybe you have logging over syslog, and now the excess traffic spills over and degrades or fails other services. Misery loves company. Rate limits on connections, rate limits on logging, aggregation of logs, and so forth may help. I may configure "block return" for particular internal addresses if there is a client that needs a faster response when something is blocked (maybe it is latency sensitive?) and I know there are no chatty and buggy (e.g. Windows) systems on the network.
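As a sketch of the rate-limit idea in pf.conf terms, where the <bruteforce> table name and the numbers are illustrative assumptions rather than settings from this site:
```
# illustrative only: drop by default, and auto-ban sources that open
# SSH connections too quickly into a hypothetical <bruteforce> table
table <bruteforce> persist
block drop
block drop quick from <bruteforce>
pass in on $pub_if proto tcp to any port 22 keep state \
    (max-src-conn-rate 5/30, overload <bruteforce> flush global)
```
The CGNAT and forged-packet caveats above still apply to anything that lands in such a table.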
AppleTalk was also pretty chatty, and this was much worse back in the olden times when too many systems sending too much traffic would cause ethernet to fall apart at rather low utilization levels. If you have an unfamiliar network you will probably want to log traffic at various points to get the lay of the land, figure out when those IPX-spamming multiplayer Doom sessions happen, etc.
With any form of autoblacklisting, a whitelist of known good addresses helps make it harder to blacklist your own addresses, as presumably the firewall checks the good ones first and lets them in. This matters most for remote access protocols such as WireGuard or SSH. I now whitelist only a few subnets for SSH, on account of the otherwise absurd level of attacks against it. Alternate ports complicate configuration and tend to be found eventually; it did take the attackers a while when I had SSH on TCP/2, and if more people run SSH daemons on alternate ports, more attackers will eventually look for those.
```
# grep good /etc/pf.conf
table <good> persist file "/etc/goodhosts"
pass in quick on $pub_if proto udp from <good> to any port 4433 keep state set queue ssh
pass in quick on $pub_if proto tcp from <good> to any port 22 keep state set queue ssh
```
Elsewhere in the ruleset, anything else destined to ports 22 or 4433 is blocked.
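Those blocking rules are not shown above; a minimal sketch of what they might look like, assuming the same $pub_if and ports, and remembering that "quick" makes the first matching rule win, so these must sit after the pass rules for <good>:
```
# hypothetical companion rules: anything not already passed from <good>
# gets dropped (placed after the quick pass rules in the ruleset)
block drop in quick on $pub_if proto tcp from any to any port 22
block drop in quick on $pub_if proto udp from any to any port 4433
```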
If you actually have a budget, there are various commercial blacklist offerings that will probably save you time and, more importantly, may provide someone else to blame when a blacklist goes awry. Over yonder, that's where the buck stops! Some may recognize this pattern from HIPAA or PCI/SOX compliance. I have generally managed to never have a budget, so have no recommendations here. Attackers can also obtain access to blacklists and launch attacks from hosts that are not on them. Probably what the internet needs is fewer hacked hosts and more non-routing of known bad actors, but that trends towards nation or nation-group firewalls, which will bring good, and bad.
A slow rollout may help: first only log what would be blocked, then review those logs for trouble before turning on a new blacklist. A problem here is that some software runs infrequently (quarterly external payments from a sketchy network, say), so there are risks of those jobs being caught in the future, or of having to collect logs for a long time to see whether anything rare would be blocked. Larger sites may accumulate all sorts of weird and generally undocumented workflows, especially if there is a lot of churn in the IT department and little to no standardization enforced.
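One way to do the log-only phase with pf is a match rule plus a hypothetical <candidate> table holding the blacklist under evaluation:
```
# observe without blocking: log whatever the candidate blacklist would
# have dropped, then review the pflog output at leisure
table <candidate> persist file "/etc/candidatehosts"
match in log on $pub_if from <candidate>
```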
=> https://www.spamhaus.org/blocklists/do-not-route-or-peer/
The DROP list is probably safe to block by default. Or call it a gateway blacklist: from there you can move on to harder blacklists, maybe ones that include Google or AWS, often with good cause given the scanning and spam coming out of the clouds, but then maybe you or your customers also use those services? Anyways, the DROP list is probably safe, though with any external blacklist you are trusting that someone else has not screwed up (or you could consume multiple sources and weigh them according to some metric of trust before deciding to blacklist a particular address, but that is more complicated and consumes more energy).
If you have hosts on various subnets you could use them to collect logs and maintain an IP address reputation database, and possibly upload that information elsewhere, but attackers probably also upload their "findings" from sockpuppet accounts. One attack here would be to try to get some website or organization onto all the blacklists: pay us and we'll get so-and-so blacklisted by everyone else. This is not too different from "pay us or we'll get your systems blacklisted elsewhere", but a discussion of how much overlap there is on a Venn diagram of criminals, governments, and the rich probably should happen elsewhere.
Also, blacklists may have errors. A "power user" who opens a lot of SSH connections looks much the same to a firewall as a bot trying to guess all the passwords. Education can help: SSH has a "ControlMaster auto" setting that makes new SSH connections reuse an existing one. Still, there is a risk of false positives, and banning the web guy because they are busy uploading a big project might be bad. Whitelisting things such as the external addresses of the web guy may also help, assuming the whitelist is kept up to date. (That site, for historical reasons, had TCP/22 open to the internet on all subnets, a practice that became less and less viable as the internet progressed.)
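On the client side, the ControlMaster setting looks something like the following; the ControlPath and ControlPersist values here are assumptions, not anything from that site:
```
# ~/.ssh/config - multiplex new sessions over one TCP connection, so a
# busy power user looks like a single connection to the firewall
Host *
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```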
A simple implementation puts the blacklist entries directly into the bad table. This lacks various perhaps necessary features, such as expiring subnets that are no longer blacklisted, or a way to figure out which blacklist a particular bad IP address came from. On the other hand, it is simple, one can put comments into the /etc/badhosts file, and any entries added by the update-spamhaus script will be expired on reboot. I generally favor rebooting hosts more often than others do, as how do you know whether a host will reboot okay without rebooting it? Waiting forever to learn this detail may result in longer outages while you figure out all the broken things that more frequent reboots would have caught.
```
#!/bin/ksh
# update-spamhaus - stick DROP addresses into the "bad" table

# after file seconds - true if the file is missing or was last
# modified more than that many seconds ago
function after {
    mtime=`stat -f %m "$1" 2>/dev/null`
    [[ -z $mtime ]] && return 0
    if [[ $((`date +%s` - $2)) -gt $mtime ]]; then
        return 0
    else
        return 1
    fi
}

# updrop 4|6 - fetch the DROP list (at most hourly) and load it
function updrop {
    file=drop_v"$1".json
    after "$file" 3600 || return
    ftp -o "$file" https://www.spamhaus.org/drop/"$file" || return
    spamhaus-drop < "$file" | pfctl -t bad -vvT add -f -
}

updrop 4
updrop 6
```
Spamhaus advises not hitting up the files more often than once per hour, hence the probably unportable modification-time checks; this code is for OpenBSD. "spamhaus-drop" is a custom program that parses the "cidr" field out of the JSON files, though this could also be done with a bigger JSON tool (e.g. jq(1)) or handled less ideally with awk, though odds are the format will not change? It is compiled statically so the server does not need jansson installed; there are tradeoffs here.
```
// spamhaus DROP parser
//   doas pkg_add jansson
//   CFLAGS="`pkg-config --cflags --libs jansson` -static" \
//     make spamhaus-drop
#include <err.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#include <jansson.h>

// input should contain one or more cidr objects, and then a coda entry:
//   ...
//   {"cidr":"223.254.0.0/16","sblid":"SBL212803","rir":"apnic"}
//   {"type":"metadata","timestamp":1718576134,"size":85375,"records":1399,"copyright":"(c) 2024 The Spamhaus Project SLU","terms":"https://www.spamhaus.org/drop/terms/"}

inline static void
emit_cidr(const json_t *obj, size_t linenum)
{
    json_t *cidr = json_object_get(obj, "cidr");
    if (!cidr) {
        json_t *type = json_object_get(obj, "type");
        if (type) return;
        errx(1, "unknown entry -:%zu", linenum);
    }
    const char *addr = json_string_value(cidr);
    if (!addr) errx(1, "no string -:%zu", linenum);
    printf("%s\n", addr);
}

int
main(int argc, char *argv[])
{
    char *line = NULL;
    size_t linesize = 0;
    ssize_t linelen;
    size_t linenum = 1;

#ifdef __OpenBSD__
    if (pledge("stdio", NULL) == -1) err(1, "pledge");
#endif

    while ((linelen = getline(&line, &linesize, stdin)) != -1) {
        json_error_t error;
        json_t *obj;
        obj = json_loads(line, 0, &error);
        if (!obj)
            errx(1, "json error -:%d: %s", error.line, error.text);
        emit_cidr(obj, linenum);
        json_decref(obj);
        ++linenum;
    }
    //free(line);
    exit(EXIT_SUCCESS);
}
```
```
# sed 3q drop_v4.json
{"cidr":"1.10.16.0/20","sblid":"SBL256894","rir":"apnic"}
{"cidr":"1.19.0.0/16","sblid":"SBL434604","rir":"apnic"}
{"cidr":"1.32.128.0/18","sblid":"SBL286275","rir":"apnic"}
# sed 3q drop_v4.json | spamhaus-drop
1.10.16.0/20
1.19.0.0/16
1.32.128.0/18
# grep bad /etc/pf.conf
table <bad> persist file "/etc/badhosts"
block drop quick on $pub_if from <bad>
```
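To keep the table fresh, something like the following root crontab entry could run the updater; the install path and the minute chosen are assumptions:
```
# hourly, which stays within the once-per-hour guidance from Spamhaus
17 * * * * /usr/local/sbin/update-spamhaus
```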
You may also want to block traffic to (not just from) the bad hosts, especially if there are client systems prone to being hacked somewhere behind the firewall. For such networks it may also make sense to limit (and log) outgoing SSH connection rates, so that hacked systems cannot do too much damage before they are noticed and taken off the network.
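In pf.conf terms that might look something like the following, where $int_net and the rate numbers are assumptions for illustration:
```
# also drop traffic headed out to the bad hosts
block drop quick on $pub_if to <bad>
# log and rate limit outbound SSH per internal client, so a hacked box
# cannot quietly spray connections at the rest of the internet
pass out log on $pub_if proto tcp from $int_net to any port 22 \
    keep state (max-src-conn-rate 10/60)
```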
A small risk here is someone taking over the website or forging the DNS of Spamhaus, in which case an attacker could ship who knows what IP addresses to be blacklisted, or could ship a huge file that fills up a disk. This is unlikely: if they can hijack a website or DNS there are much worse things they could do than toy with a blacklist. Still, you may want to review the changes, or alert if "too many" addresses change, and maybe spot-check now and then why various ranges are banned. The blacklists could also be kept under version control, which would allow the changes to be diffed over time.
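A sketch of the "too many addresses" alert, which could be bolted onto update-spamhaus; the threshold is an arbitrary assumption:
```
# refuse a suspiciously large DROP list (the 5000 here is made up)
new=$(spamhaus-drop < drop_v4.json | wc -l | tr -d ' ')
if [ "$new" -gt 5000 ]; then
    logger -t update-spamhaus "refusing large DROP list: $new entries"
    exit 1
fi
```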
Logging probably should be enabled now and then (if not always on) so you can check what the rules are doing; a small wrapper script for that follows.
```
#!/bin/sh
# fwlog - show the firewall logs as detailed in pflogd(8)
#   fwlog
#   fwlog /var/log/pflog
[ -z "$1" ] && exec tcpdump -n -e -ttt -i pflog0
exec tcpdump -n -e -ttt -r "$1"
```
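Note that only rules carrying the "log" keyword show up on pflog0, so the block rule from earlier would need to grow a "log" for fwlog to have anything to report:
```
block drop log quick on $pub_if from <bad>
```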
Bad hosts (actually addresses or subnets) probably need a wrapper script, unless you are better at remembering which pfctl -t or -T does what. This script could be made more complicated by adding flags to also append the input to the permanent /etc/badhosts file, to do DNS to IP address lookups, to check whether the input (and maybe the rest of its subnet?) is on an RBL, to roll IPv6 addresses up to a /64, and so on. But you have to start somewhere with these sorts of wrapper scripts, and avoiding the dreaded "second system syndrome" might be good?
```
#!/bin/sh
# badhost - add something to the bad table
exec pfctl -t bad -T add "$1"
```
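Usage is then about as boring as it should be; the address here is from the TEST-NET-3 documentation range, purely an example:
```
# badhost 203.0.113.0/24
# pfctl -t bad -T show | grep 203.0.113
```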