I’m using fail2ban to watch over my sites. Or at least I thought I was!
fail2ban to watch over my sites
Apparently, `fail2ban` didn’t notice a thing.
But zoom out and you’ll notice that the rules for Apache and Gopher aren’t seeing any matches.
What happened? I don’t know for sure, but I suspect it’s because I removed IP numbers from the web server logs, which meant the fail2ban rules no longer matched. Not once. Then somebody unleashed another bot and here we are, once again fighting the leeches.
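That makes sense when you consider how a fail2ban filter finds offenders: the failregex has to capture the client address via the `<HOST>` tag, and with the IP numbers stripped from the log there is nothing for it to capture. A generic sketch of such a filter (not my actual rules, and the file name is just an assumption):

    # sketch of a filter like /etc/fail2ban/filter.d/alex-apache.conf;
    # <HOST> is where fail2ban expects the client IP, so with IP logging
    # turned off, nothing ever matches
    [Definition]
    failregex = ^<HOST> .*"(GET|POST|HEAD)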
Well, I’ve enabled the logging of IP numbers again and I can see that fail2ban has started banning some of them. Unfortunately, load remains an issue.
So now I’ve started the process of tearing the Toadfarm apart, for two reasons:
1. the processes were all named `farm` so I couldn’t tell them apart
2. Mojolicious offers a status plugin which doesn’t work with Toadfarm
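(The plugin in point 2 is Mojolicious::Plugin::Status, installed separately from CPAN; loading it is basically a one-liner. A minimal sketch, shown here for a Mojolicious::Lite app:)

    # minimal sketch: enable the status page in a Mojolicious::Lite app
    use Mojolicious::Lite;
    plugin 'Status';
    app->start;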
So now I’m removing the applications one by one from Toadfarm, using *Hypnotoad* to run them:
1. remove the application from the Toadfarm script
2. assign it a separate port in the Apache config
3. create a separate Monit config for each application
4. create a separate Hypnotoad wrapper script for each application
Then see if it works. (A sketch of what I mean by a wrapper script is below.)
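Something like this, with a made-up application name and path:

    #!/bin/sh
    # hypothetical wrapper for a single application (name and path made up);
    # the app's own config sets its listen port (which Apache proxies to)
    # and its pid file (which Monit watches); hypnotoad daemonizes itself
    exec hypnotoad /home/alex/farm/food.pl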
Current status: 4/27 done. I have a bad feeling about this.
#Web #Administration #fail2ban
(Please contact me if you want to remove your comment.)
⁂
Status: still fucked.
CPU at 100% for 100% of the time
– Alex Schroeder 2019-06-27 07:02 UTC
---
Migrated more applications from Toadfarm to Hypnotoad. Looked at the log files and banned a bot via its User-Agent because their web page said they only honour the robots.txt directives addressed to all bots. Also witnessed dozens of requests for the same page within a short time, from different IP numbers and with different user agents; I checked the page and saw that it was being spammed. They had made 307 edits in four days, starting June 23. Hm. Look at the graph above... Perhaps they are the problem?
    +-----------------+---------+
    | log file        | matches |
    +-----------------+---------+
    | access.log      |     125 |
    | access.log.1    |     584 |
    | access.log.2.gz |     726 |
    | access.log.3.gz |     870 |
    | access.log.4.gz |     556 |
    +-----------------+---------+
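(Counted with something along these lines; ‘SomePage’ is a placeholder for the actual pattern I grepped for:)

    # rough sketch: count matches per log file, compressed or not
    for f in /var/log/apache2/access.log*; do
        printf "%s %s\n" "$f" "$(zgrep -c 'SomePage' "$f")"
    done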
My wikis don’t react well when lots of hits arrive in bursts, sadly. That’s what fail2ban is supposed to stop. We’ll see how it goes. I’m not too hopeful because I don’t think these are actually a lot of hits.
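For context, the jail that is supposed to catch these bursts has this general shape; the thresholds shown here are illustrative, not my actual settings:

    # sketch of a burst-limiting jail; the numbers are made up
    [alex-apache]
    enabled  = true
    filter   = alex-apache
    logpath  = /var/log/apache2/access.log
    findtime = 40
    maxretry = 20
    bantime  = 600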
Looking at the current log file:
    +----------------+------+------+
    | hour           | hits | rel. |
    +----------------+------+------+
    | 27/Jun/2019:13 |   17 |  13% |
    | 27/Jun/2019:12 |   21 |  16% |
    | 27/Jun/2019:11 |   18 |  14% |
    | 27/Jun/2019:10 |   15 |  11% |
    | 27/Jun/2019:09 |   17 |  13% |
    | 27/Jun/2019:08 |   16 |  12% |
    | 27/Jun/2019:07 |   11 |   8% |
    | 27/Jun/2019:06 |   13 |  10% |
    +----------------+------+------+
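(Hits per hour come straight out of the access log; a sketch, assuming the standard Apache timestamp format and with ‘SomePage’ again standing in for the real pattern:)

    # count hits per hour: isolate the timestamp, keep day and hour, tally
    grep 'SomePage' /var/log/apache2/access.log \
        | cut -d '[' -f 2 | cut -d ':' -f 1-2 | sort | uniq -c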
The data for Apache accesses also makes me think that the number of hits isn’t the problem, here.
Hits don't grow like CPU% does
As for the conversion from Toadfarm to Hypnotoad: just four applications left to do! 🙂
– Alex Schroeder 2019-06-27 11:24 UTC
---
Also, I’m reducing the number of preforked workers to 1 for almost all apps (the relevant setting is sketched below). The only exceptions are *Emacs Wiki* (3) and *Campaign Wiki* (2), as these seem to be the most popular ones, based on hits per domain name:
    +-------------------+-------+------+
    | domain            |  hits | rel. |
    +-------------------+-------+------+
    | www.emacswiki.org | 17831 |  51% |
    | campaignwiki.org  |  6424 |  18% |
    | alexschroeder.ch  |  5969 |  17% |
    | communitywiki.org |  2069 |   6% |
    +-------------------+-------+------+
I think that helps? This was one of the benefits of the Toadfarm: you didn’t have idle workers sitting around for every single app. Now I do. Can’t go below one worker per app, though. 🙂
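(The worker count is just a setting in each application’s Hypnotoad configuration; a sketch, with a made-up listen address and port:)

    # sketch of the hypnotoad section in one app's config file;
    # the listen address and port are made up
    hypnotoad => {
        listen  => ['http://127.0.0.1:8080'],
        workers => 1,
    },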
Number of processes going down
– Alex Schroeder 2019-06-27 11:37 UTC
---
Well, right now it doesn’t look so bad!
– Alex Schroeder 2019-06-27 11:59 UTC
---
Yes, looks like I’m back from the brink.
But to be honest I’m not sure what did it. Perhaps moving from Toadfarm to Hypnotoad allowed me to see that a lot of processes were serving the *Food* wiki even though it wasn’t getting a lot of hits? Or perhaps that was simply half the story and the other half was `fail2ban` not working? I’m not sure. Nothing helps against distributed denial of service attacks, basically.
I think I’d need a “shields up” mode. Something that I could trigger from a job when the load goes over twenty or thirty? Perhaps there’s also a way to write a job that runs once a day and starts all the Monit services that are no longer running, since currently I only attempt to restart them a handful of times before giving up. (A sketch of such a job follows.)
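Something like this might do, as a daily cron job. This is just an idea, not something I’m running yet:

    # hypothetical /etc/cron.d entry: once a day, tell Monit to start
    # (and resume monitoring) all services, including the ones it gave up on
    0 6 * * *  root  /usr/bin/monit start all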
For reference, this is what the Monit stanza for the main site looks like right now:

    check process alexschroeder with pidfile ...
        start program = "..." as uid alex gid alex
        stop program = "..." as uid alex gid alex
        if failed host ... port ... type tcp protocol http
            and request "..."
            for 5 cycles
        then restart
        if totalmem > 500 MB for 5 cycles then restart
        if 3 restarts within 15 cycles then timeout
I’ve just set `set daemon 300` (up from 120) so now it checks services every five minutes.
If it can’t reach the site for five cycles, i.e. 25 minutes (up from 10), it restarts the service, and if there were already 3 restarts within 15 cycles, i.e. 75 minutes (up from 30), Monit gives up on it and I have to start it again manually after investigating.
Sadly, my impression is that “timeout” doesn’t actually *stop* the service; it’s just Monit that stops monitoring it. The website continues to struggle and eat up resources. So I think what I actually need is to use stop instead of timeout/unmonitor.
stop instead of timeout/unmonitor
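If I read the Monit documentation correctly, that would just be a change to the last line of the stanza above (a sketch):

    # actually stop the service instead of merely giving up on monitoring it
    if 3 restarts within 15 cycles then stop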
– Alex Schroeder 2019-06-27 14:47 UTC
---
At least fail2ban seems to be working again:
– Alex Schroeder 2019-07-04 08:18 UTC
---
Ok, I decided to look at the `/var/log/fail2ban.log` file.
    2019-07-04 09:37:41,759 fail2ban.actions [459]: NOTICE [alex-apache] Ban 64.62.202.71
    2019-07-04 09:40:39,990 fail2ban.actions [459]: NOTICE [alex-apache] Ban 62.210.139.12
    2019-07-04 09:47:42,540 fail2ban.actions [459]: NOTICE [alex-apache] Unban 64.62.202.71
    2019-07-04 09:50:40,779 fail2ban.actions [459]: NOTICE [alex-apache] Unban 62.210.139.12
Hm... Let’s see who offends the most. Let’s write `fail2ban-report.pl`.
    #!/usr/bin/env perl
    # Read a fail2ban log on STDIN and report, per IP: the number of bans,
    # the share of all bans, and the mean interval between bans.
    use Time::ParseDate;
    while (<STDIN>) {
        # only count "Ban" lines, skip unbans and repeats
        next if /Unban|already banned/;
        m/^([0-9: -]+).*Ban (\S+)$/ or warn "Cannot parse:\n$_" and next;
        my ($date, $ip) = ($1, $2);
        $time = parsedate($date);
        $count{$ip}++;
        $first{$ip} = $time unless $first{$ip};
        $last{$ip} = $time;
        $total++;
    }
    # most banned IPs first
    @result = sort {$count{$b} <=> $count{$a}} keys %count;
    print " ip #bans bans% interval\n";
    foreach my $ip (@result) {
        $avg = 0;
        if ($first{$ip} and $last{$ip} and $count{$ip} > 1) {
            # mean time between bans, in hours
            $avg = ($last{$ip} - $first{$ip}) / ($count{$ip} - 1) / 3600;
        }
        printf("% 40s %10d %3d%% %7s\n", $ip, $count{$ip},
               100 * $count{$ip} / $total,
               $avg ? sprintf('%3.2fh', $avg) : '');
    }
And call it:
    /home/alex/bin/fail2ban-report.pl < /var/log/fail2ban.log | head
                ip   #bans   bans%   interval
    62.210.180.146      69     10%      1.04h
               ***      60      8%      1.68h
     62.210.139.12      54      7%      1.33h
    62.210.180.164      52      7%      1.18h
     62.210.83.206      47      6%      1.54h
     62.210.177.44      32      4%      2.22h
               ***      28      4%      2.61h
               ***      23      3%      2.53h
               ***      16      2%      4.56h
This `62.210.*.*` number sure shows up a lot! `whois 62.210.180.146` tells me that `62.210.128.0 - 62.210.255.255` is owned by a French hosting and services provider. Oh well.
Let’s see what they do:
    alexschroeder.ch:443 62.210.180.146 - - [04/Jul/2019:08:07:22 +0200] "POST /software HTTP/1.0" 403 5976 "https://alexschroeder.ch/software/Comments_on_Merge_Venus_dev_branch" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.87 Safari/537.36"
That’s right, they’re trying to spam my wiki! They’re already getting a 403 response, but I really hate those spammers.
I recently rebooted the server so all my blacklists are gone. But we can get them back. Let’s start with the following:
    ipset create blacklist hash:ip hashsize 4096
    iptables -I INPUT -m set --match-set blacklist src -j DROP
    iptables -I FORWARD -m set --match-set blacklist src -j DROP
    ipset add blacklist 62.210.180.146
    ipset add blacklist 62.210.180.164
    ipset add blacklist 62.210.177.44
    ipset add blacklist 62.210.139.12
    ipset add blacklist 62.210.83.206
That’s right, I added the last one as well. I checked and the `62.210.0.0 - 62.210.127.255` range belongs to the same provider.
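Since both halves of 62.210.0.0/16 belong to the same provider, I could also block the whole range with a single entry; a hash:net set would do it (a sketch, not something I’ve actually run):

    # sketch: block the provider's whole 62.210.0.0/16 range at once
    ipset create blacklist-net hash:net
    ipset add blacklist-net 62.210.0.0/16
    iptables -I INPUT -m set --match-set blacklist-net src -j DROP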
Check the setup:
    iptables -L
    ipset list blacklist
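And since a reboot is what wiped the blacklist in the first place, it would make sense to persist the set somewhere; roughly like this, with a made-up file name, restoring it at boot before the iptables rules that reference it:

    # save the set to a file (the path is arbitrary) ...
    ipset save blacklist > /etc/ipset-blacklist.conf
    # ... and restore it at boot, e.g. from rc.local or a systemd unit
    ipset restore < /etc/ipset-blacklist.conf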
– Alex Schroeder 2019-07-04 09:03 UTC
---
I also checked #2 in the list above: I looked at the logs and noticed that they make two HEAD requests and one GET request for every new page on my site. Looks like they’re checking all the links in the RSS feed, perhaps?
    grep *** /var/log/apache2/access.log | /home/alex/bin/leech-detector
    ip  hits  bandw.  hits%  interv.  status code distrib.
43 hits doesn’t look like much for today, but they did get banned 60 times by `fail2ban`. Remember the data up there? Once every 2h at least! That must be the 503 result. Let’s look at these patterns.
    grep *** /var/log/apache2/access.log | cut -c 50-69,77,79- | sed -e 's/Friendica.*/Friendica.../' -e 's/".*"/ /' -e 's/HTTP\/1.1 //'
    04/Jul/2019:07:05:03 GET /wiki/feed/full/ Friendica...
    04/Jul/2019:07:05:04 HEAD /wiki/2019-06-25_Reflowing_arbitrary_text Friendica...
    04/Jul/2019:07:05:04 HEAD /wiki/2019-06-25_Reflowing_arbitrary_text Friendica...
    04/Jul/2019:07:05:04 GET /wiki/2019-06-25_Reflowing_arbitrary_text Friendica...
    04/Jul/2019:07:05:05 HEAD /wiki/2019-06-26_Gopher_to_HTTP_and_HTML_Proxy Friendica...
    04/Jul/2019:07:05:05 HEAD /wiki/2019-06-26_Gopher_to_HTTP_and_HTML_Proxy Friendica...
    04/Jul/2019:07:05:06 GET /wiki/2019-06-26_Gopher_to_HTTP_and_HTML_Proxy Friendica...
    04/Jul/2019:07:05:06 HEAD /wiki/2019-06-26_Hiding_your_address Friendica...
    04/Jul/2019:07:05:07 HEAD /wiki/2019-06-26_Hiding_your_address Friendica...
    04/Jul/2019:07:05:07 GET /wiki/2019-06-26_Hiding_your_address Friendica...
    04/Jul/2019:07:05:08 HEAD /wiki/2019-06-26_Privacy_vs._fail2ban Friendica...
    04/Jul/2019:07:05:08 HEAD /wiki/2019-06-26_Privacy_vs._fail2ban Friendica...
    04/Jul/2019:07:05:09 GET /wiki/2019-06-26_Privacy_vs._fail2ban Friendica...
    04/Jul/2019:07:05:09 HEAD /wiki/2019-06-29_Microsoft_illustrates_why_DRM_is_shit Friendica...
    04/Jul/2019:07:05:10 HEAD /wiki/2019-06-29_Microsoft_illustrates_why_DRM_is_shit Friendica...
    04/Jul/2019:07:05:11 GET /wiki/2019-06-29_Microsoft_illustrates_why_DRM_is_shit Friendica...
    04/Jul/2019:07:05:11 HEAD /wiki/2019-06-29_Property Friendica...
    04/Jul/2019:07:05:12 HEAD /wiki/2019-06-29_Property Friendica...
    04/Jul/2019:07:05:12 GET /wiki/2019-06-29_Property Friendica...
    04/Jul/2019:07:05:12 HEAD /wiki/2019-07-01_Orcs Friendica...
    04/Jul/2019:07:05:13 HEAD /wiki/2019-07-01_Orcs Friendica...
    04/Jul/2019:08:35:03 GET /wiki/feed/full/ Friendica...
    04/Jul/2019:08:35:03 HEAD /wiki/2019-06-25_Reflowing_arbitrary_text Friendica...
    04/Jul/2019:08:35:04 HEAD /wiki/2019-06-25_Reflowing_arbitrary_text Friendica...
    04/Jul/2019:08:35:04 GET /wiki/2019-06-25_Reflowing_arbitrary_text Friendica...
    04/Jul/2019:08:35:04 HEAD /wiki/2019-06-26_Gopher_to_HTTP_and_HTML_Proxy Friendica...
    04/Jul/2019:08:35:05 HEAD /wiki/2019-06-26_Gopher_to_HTTP_and_HTML_Proxy Friendica...
    04/Jul/2019:08:35:05 GET /wiki/2019-06-26_Gopher_to_HTTP_and_HTML_Proxy Friendica...
    04/Jul/2019:08:35:06 HEAD /wiki/2019-06-26_Hiding_your_address Friendica...
    04/Jul/2019:08:35:06 HEAD /wiki/2019-06-26_Hiding_your_address Friendica...
    04/Jul/2019:08:35:06 GET /wiki/2019-06-26_Hiding_your_address Friendica...
    04/Jul/2019:08:35:07 HEAD /wiki/2019-06-26_Privacy_vs._fail2ban Friendica...
    04/Jul/2019:08:35:07 HEAD /wiki/2019-06-26_Privacy_vs._fail2ban Friendica...
    04/Jul/2019:08:35:07 GET /wiki/2019-06-26_Privacy_vs._fail2ban Friendica...
    04/Jul/2019:08:35:08 HEAD /wiki/2019-06-29_Microsoft_illustrates_why_DRM_is_shit Friendica...
    04/Jul/2019:08:35:08 HEAD /wiki/2019-06-29_Microsoft_illustrates_why_DRM_is_shit Friendica...
    04/Jul/2019:08:35:09 GET /wiki/2019-06-29_Microsoft_illustrates_why_DRM_is_shit Friendica...
    04/Jul/2019:08:35:09 HEAD /wiki/2019-06-29_Property Friendica...
    04/Jul/2019:08:35:09 HEAD /wiki/2019-06-29_Property Friendica...
    04/Jul/2019:08:35:10 GET /wiki/2019-06-29_Property Friendica...
    04/Jul/2019:08:35:10 HEAD /wiki/2019-07-01_Orcs Friendica...
    04/Jul/2019:08:35:10 HEAD /wiki/2019-07-01_Orcs Friendica...
    04/Jul/2019:08:35:11 GET /wiki/2019-07-01_Orcs Friendica...
Let’s list the things I am unhappy with:
1. two HEAD requests when one would have done
2. it sends all the requests until it gets a 503
3. the next time it shows up, it hasn’t slowed down
So what I think I’m going to do is simply add them to the User-Agent-based rules in my Apache site configuration:
    RewriteEngine on
    RewriteCond "%{HTTP_USER_AGENT}" "Mastodon|Pcore|MegaIndex|Friendica"
    RewriteRule ".*" "-" [redirect=403,last]
There you go. Sorry, Friendica! I suspect somebody has added my blog’s RSS feed to their Friendica instance, and that’s a very flattering thing, but I don’t think this is how feed aggregators ought to work.
– Alex Schroeder 2019-07-04 09:22 UTC