I’m hosting my sites on a tiny server, and it’s all dynamic web apps and wikis and CGI scripts that take CPU and memory resources. Not a problem if humans are browsing the site. But if people try to download the entire site using automated tools that don’t wait between requests (leeches), or that ignore the meta information included in HTTP headers and HTML tags, then they overload my sites and get lost in the maze of links: page history, recent changes, old page revisions... you can download forever if you’re not careful.
Enter *fail2ban*. This tool watches log files for regular expressions (filters) and if it finds matches, it adds the offending IP addresses to the firewall. You then tell it which filters to apply to which log files and how many hits you’ll allow (a jail).
When writing the rules I need to be careful: it’s OK to download a lot of static files. I just don’t want leeches, or spammers trying to brute-force the questions I sometimes ask before people get to edit their first page on my sites.
Here’s my setup:
`/etc/fail2ban/filter.d/alex-apache.conf`
This is for the Apache web server with virtual hosts. The comment shows an example entry.
Notice the `ignoreregex` to make sure that some of the apps and directories don’t count.
Note that only newer versions of `fail2ban` will be able to match IPv6 hosts.
    # Author: Alex Schroeder <alex@gnu.org>

    [Definition]

    # ANY match in the logfile counts!
    # communitywiki.org:443 000.000.000.000 - - [24/Aug/2018:16:59:55 +0200] "GET /wiki/BannedHosts HTTP/1.1" 200 7180 "https://communitywiki.org/wiki/BannedHosts" "Pcore-HTTP/v0.44.0"
    failregex = ^[^:]+:[0-9]+ <HOST>

    # Except cgit, css files, images...
    # alexschroeder.ch:443 0:0:0:0:0:0:0:0 - - [28/Aug/2018:09:14:39 +0200] "GET /cgit/bitlbee-mastodon/objects/9b/ff0c237ace5569aa348f6b12b3c2f95e07fd0d HTTP/1.1" 200 3308 "-" "git/2.18.0"
    ignoreregex = ^[^"]*"GET /(robots\.txt |favicon\.ico |[^/ ]+.(css|js) |cgit/|css/|fonts/|pics/|1pdc/|gallery/|static/|munin/|osr/|indie/|face/|traveller/|hex-describe/|text-mapper/)
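Regarding that IPv6 note above: IPv6 matching arrived with the 0.10 release, if I remember correctly, and a quick way to check which version is installed is to ask the client itself:

    fail2ban-client --version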
`/etc/fail2ban/filter.d/alex-gopher.conf`
Yeah, I also make the wiki available via gopher...
    # Author: Alex Schroeder <alex@gnu.org>

    [Init]

    # 2018/08/25-09:08:55 CONNECT TCP Peer: "[000.000.000.000]:56281" Local: "[000.000.000.000]:70"
    datepattern = ^%%Y/%%m/%%d-%%H:%%M:%%S

    [Definition]

    # ANY match in the logfile counts!
    failregex = CONNECT TCP Peer: "\[<HOST>\]:\d+"
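To verify that a filter like this actually matches the log lines, the `fail2ban-regex` tool that ships with fail2ban can be pointed at a log file and a filter file, something like this (using the gopher log path from the jail below):

    # show how many log lines the failregex matches and how many are ignored
    fail2ban-regex /home/alex/farm/gopher-server.log /etc/fail2ban/filter.d/alex-gopher.conf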
`/etc/fail2ban/jail.d/alex.conf`
Now I need to tell `fail2ban` which log files to watch and which filters to use.
Note how I assume a human will basically click a link every 2s. Bursts are OK, but 20 hits in 40s is the limit.
Notice that the third jail just reuses the filter of the second jail.
    [alex-apache]
    enabled = true
    port = http,https
    logpath = %(apache_access_log)s
    findtime = 40
    maxretry = 20

    [alex-gopher]
    enabled = true
    port = 70
    logpath = /home/alex/farm/gopher-server.log
    findtime = 40
    maxretry = 20

    [alex-gopher-ssl]
    enabled = true
    filter = alex-gopher
    port = 7443
    logpath = /home/alex/farm/gopher-server-ssl.log
    findtime = 40
    maxretry = 20
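After adding or changing filters and jails, fail2ban has to re-read its configuration; assuming a typical systemd installation, either of these should do it:

    # reload filters and jails
    fail2ban-client reload
    # or restart the whole service
    systemctl restart fail2ban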
#Administration #Web #fail2ban
(Please contact me if you want to remove your comment.)
⁂
Blimey. I don’t remember the last time anyone did anything gophery I noticed.
– Blue Tyson 2019-01-30 10:47 UTC
---
The #gopher discussion is alive and well on Mastodon... 🙂
– Alex Schroeder 2019-01-30 13:11 UTC
---
Something to review when I have a bit of time: Web Server Security by @infosechandbook.
– Alex Schroeder 2019-02-01 18:11 UTC
---
OK, I added a meta rule: If people get banned a few times, I want to ban them for longer periods! (But see below! There is a better solution.)
This is `filter.d/alex-fail2ban.conf`:
    # Author: Alex Schroeder <alex@gnu.org>

    [Init]

    # 2019-07-07 06:45:45,663 fail2ban.actions [459]: NOTICE [alex-apache] Ban 187.236.231.123
    datepattern = ^%%Y-%%m-%%d %%H:%%M:%%S

    [Definition]

    failregex = NOTICE .* Ban <HOST>
And my jail in `jail.d/alex.conf` gets a new section that uses this filter:
    [alex-fail2ban]
    enabled = true
    # all ports
    logpath = /var/log/fail2ban.log
    # ban repeated offenders for 6h: if you get banned three times in an
    # hour, you're banned for 6h
    bantime = 6h
    findtime = 1h
    maxretry = 3
– Alex Schroeder 2019-07-10 10:36 UTC
---
Oh, and if you’re curious, here’s my `fail2ban` cheat sheet. Remember, `fail2ban` has a separate blacklist!
    # Get all the jails
    fail2ban-client status

    # List banned IPs in a jail
    fail2ban-client status alex-apache

    # Unban an IP
    fail2ban-client unban 127.0.0.1
– Alex Schroeder 2019-07-10 10:38 UTC
---
As you can see, Munin is picking up the new rule, but apparently all the bans are due to Apache logs.
I’m quite certain that my SSH bans are zero because I’m running SSH on a non-standard port... 😇 I know, some people disapprove. But I say: everything else being the same, running it on a separate port simply reduces the number of drive-by attacks; in other words, if you’re not being targeted specifically but only *incidentally*, then having moved to a non-standard port helps.
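For reference, the change itself is a single line in `/etc/ssh/sshd_config` (the port number below is only an example), plus a restart of `sshd`:

    # /etc/ssh/sshd_config
    Port 2222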
– Alex Schroeder 2019-07-10 13:06 UTC
---
@MrManor recently told me that fail2ban watching its own logs is already in `jail.conf`: look for `[recidive]`.
This is what it has:
    [recidive]
    logpath = /var/log/fail2ban.log
    banaction = %(banaction_allports)s
    bantime = 1w
    findtime = 1d
There’s also a warning regarding `fail2ban.conf`, saying I must change `dbpurgeage`. No problem:
    dbpurgeage = 648000
All I need to do is enable it in `jail.d/alex.conf` by writing:
    [recidive]
    enabled = true
Now the file `filter.d/alex-fail2ban.conf` and the section `[alex-fail2ban]` in my `jail.d/alex.conf` are both unnecessary.
– Alex Schroeder 2019-07-30 06:25 UTC
---
These days I no longer check for Gopher using fail2ban because I’m using a different solution for Phoebe (my Gemini wiki, which also serves Gopher).
When I look at my Gemini logs, I see that plenty of requests come from Amazon hosts. I take that as a sign of autonomous agents. I might sound like a fool on the Butlerian Jihad, but if I need to block entire networks, then I will. – 2020-12-25 Defending against crawlers
– Alex 2021-08-22 11:26 UTC
---
This works when you know the protocol and port and there is no multiplexing, so tracking new connections is meaningful when a failed attempt requires a new one. – Juan
This is the example from the original mail:
    cat /etc/iptables/rules.v4
    # Generated by iptables-save v1.4.21 on Tue Feb 16 15:42:27 2016
He continues:
(you can also do it for ipv6, and I do!)
Essentially, if you get to 6 new open connections to port 22, packets from that IP are dropped for a minute. That disrupts any SSH scan!
In the case of a false positive, I would have to wait for a minute, which is not too bad. In fact, I should increase the ban to 3 or 5 minutes; that would be even more effective against SSH brute-force attacks, because some dumb scripts will continue scanning after those 60 seconds, but most attackers just give up when the packets are dropped.
I’ve been using this forever, essentially, because I’m too lazy to set up fail2ban.
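The actual rules from the mail aren’t reproduced above, but a minimal sketch of the kind of setup Juan describes might look like this, using the iptables `conntrack` and `recent` modules (the list name and the exact numbers are my assumptions, not his):

    # remember every new TCP connection to port 22 per source address
    iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
      -m recent --name sshscan --set
    # drop new connections from a source that opened 6 or more of them
    # within the last 60 seconds (the timer refreshes while it keeps trying)
    iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
      -m recent --name sshscan --update --seconds 60 --hitcount 6 -j DROP
    # otherwise let SSH through
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT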
Let’s assume we want to rate limit incoming requests to our web server to prevent Denial-of-Service (DoS) attacks and ensure fair resource allocation among users. We can achieve this using the *iptables conntrack* extension… – 4.2. Web Server Protection
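That quote is cut short, but for illustration, a sketch along those lines might pair conntrack state matching with the `hashlimit` module to cap new connections per source IP (the name, port, and limits here are made up for the example, not taken from that document):

    # drop new connections to port 80 from any source exceeding
    # roughly 20 new connections per minute
    iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate NEW \
      -m hashlimit --hashlimit-name http-limit --hashlimit-mode srcip \
      --hashlimit-above 20/minute --hashlimit-burst 20 -j DROP
    # accept the remaining new connections
    iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate NEW -j ACCEPT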