Today I couldn't log into the server. `ssh` just sat there, waiting. Some web page loaded a few bytes and then stopped. I knew it: some idiot had once again written a bot that was hitting my web apps (also known as "expensive endpoints") because they ignored all the rules. They also didn't get caught by fail2ban because they were using cloud services, of course. And the only thing that helps against these fuckers is banning the whole network. Time to get working.
First, I had to reboot the server via the website of my service provider.
Then I had to figure out which app it was. That turned out to be nearly impossible, because all my expensive apps run from a directory called `/home/alex/farm`.
When I look at the system log file, I find entries like these:
```
grep "Out of memory" /var/log/syslog | tail | cut -b 72-
Out of memory: Killed process 1525 (/home/alex/farm) total-vm:95736kB, anon-rss:7700kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:208kB oom_score_adj:0
Out of memory: Killed process 521 (node) total-vm:1268068kB, anon-rss:63440kB, file-rss:0kB, shmem-rss:0kB, UID:118 pgtables:2112kB oom_score_adj:0
Out of memory: Killed process 12302 (/home/alex/farm) total-vm:92464kB, anon-rss:54820kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:208kB oom_score_adj:0
Out of memory: Killed process 12622 (/home/alex/farm) total-vm:92420kB, anon-rss:53040kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:208kB oom_score_adj:0
Out of memory: Killed process 12628 (/home/alex/farm) total-vm:92712kB, anon-rss:51460kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:212kB oom_score_adj:0
Out of memory: Killed process 12632 (/home/alex/farm) total-vm:92556kB, anon-rss:51916kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:208kB oom_score_adj:0
Out of memory: Killed process 12206 (/home/alex/farm) total-vm:92320kB, anon-rss:57256kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:208kB oom_score_adj:0
Out of memory: Killed process 12016 (/home/alex/farm) total-vm:92292kB, anon-rss:57740kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:208kB oom_score_adj:0
Out of memory: Killed process 12689 (/home/alex/farm) total-vm:92728kB, anon-rss:57444kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:212kB oom_score_adj:0
Out of memory: Killed process 12041 (/home/alex/farm) total-vm:92484kB, anon-rss:58288kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:208kB oom_score_adj:0
```
This is not helping. Who is 12041? The log is full of process identifiers and numbers, but the name is always truncated, because the kernel only keeps the first 15 characters of a process name:
```
grep "12041" /var/log/syslog | tail -n 3
2024-07-20T19:51:03.968678+02:00 sibirocobombus kernel: [ 3275.028117] [ 12041] 1000 12041 23121 14572 212992 4117 0 /home/alex/farm
2024-07-20T19:51:03.968882+02:00 sibirocobombus kernel: [ 3275.028363] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/monit.service,task=/home/alex/farm,pid=12041,uid=1000
2024-07-20T19:51:03.968883+02:00 sibirocobombus kernel: [ 3275.028378] Out of memory: Killed process 12041 (/home/alex/farm) total-vm:92484kB, anon-rss:58288kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:208kB oom_score_adj:0
```
Useless.
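The kernel log only ever shows that truncated name, so the culprit has to be caught while it's still alive. A plain `ps` listing of the biggest memory consumers, with their full command lines, does the trick (generic, nothing specific to my setup):

```
# List the processes using the most resident memory, with their full
# command lines, while they are still running.
ps -eo pid,user,rss,args --sort=-rss | head
```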
By that time, however, the system was slowing down again. I fired up `htop` and saw a gazillion instances of the Community Wiki script. Looking at `/var/log/apache2/access.log` showed that bots were requesting the edit pages (!) of all the thousands of Community Wiki pages.
Thanks, idiots.
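A quick count confirms it. Assuming the edit links contain `action=edit`, as they do on an Oddmuse wiki like Community Wiki, something like this does it:

```
# Count how many of the wiki requests in the current log are edit pages.
grep communitywiki /var/log/apache2/access.log | grep -c "action=edit"
```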
I needed to find out where these requests were coming from. My leech-detector script was as limited to single IP numbers as fail2ban, so the first order of business was to add some functionality: given the option `--networks`, it now looks up the network range each request comes from and reports those:
```
tail -n 50 /var/log/apache2/access.log | grep communitywiki | bin/admin/leech-detector --networks
```
This is expensive, of course, so I try to cache the Route Views ASN lookups, but the cache doesn't persist between calls. Using a small number of rows is the right approach here.
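For the curious, here's a rough sketch of the kind of lookup involved, not the actual leech-detector code: ask the Route Views DNS zone which network announces a given IP and keep the answers in a small cache file (which, unlike my in-script cache, would even persist between calls). I'm assuming `asn.routeviews.org` wants the IPv4 octets reversed, like `in-addr.arpa`.

```
# Sketch only: look up the announcing network for an IPv4 address via
# the Route Views DNS zone and cache the answers in a file.
CACHE=/tmp/asn-cache.txt
touch "$CACHE"

asn_lookup () {
    ip=$1
    hit=$(grep "^$ip " "$CACHE")
    if [ -n "$hit" ]; then
        echo "$hit"
        return
    fi
    reversed=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1}')
    answer=$(dig +short TXT "$reversed.asn.routeviews.org")
    echo "$ip $answer" | tee -a "$CACHE"
}

asn_lookup 91.194.61.231
```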
The result is a table like the one below. When I ran it, I used 400 rows or so.
```
+----------------+------+--------+------+---------+------------+
| IP             | Hits | Bandw. | Rel. | Interv. | Status     |
+----------------+------+--------+------+---------+------------+
| 117.22.0.0/15  |    1 |     6K |  25% |         | 200 (100%) |
| 59.172.0.0/14  |    1 |     6K |  25% |         | 200 (100%) |
| 114.104.0.0/14 |    1 |     6K |  25% |         | 200 (100%) |
| 220.184.0.0/13 |    1 |     6K |  25% |         | 200 (100%) |
+----------------+------+--------+------+---------+------------+
```
If you then start running `whois` on these networks, you'll see a pattern.
I think you know where this is going.
And so I kept copying the network, calling `whois`, verifying that it was China Telecom or related, and adding it to my ban-cidr script. (Note how I'm using `netfilter-persistent` these days so that I don't have to run `ban-cidr` after every server restart.)
And now I'm going to add the ranges above to the script.
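The idea behind such a ban is simple enough. Here's a minimal sketch, not the actual `ban-cidr` script, using one of the ranges from the table above:

```
# Drop all traffic from the offending network and save the rules so they
# survive a reboot (netfilter-persistent comes with iptables-persistent).
iptables -I INPUT -s 117.22.0.0/15 -j DROP
netfilter-persistent save
```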
Some days I feel like I'm slowly starting to ban the whole commercial cloud service internet because nothing good ever seems to come of it. Do you feel the same?
#Administration #Bots #Butlerian Jihad
Meanwhile, something was also hammering the IRC daemon. On July 21 alone, `ngircd` logged thousands of lines:

```
journalctl --unit ngircd.service|grep "Jul 21"|wc -l
7436
```
More than half of them come from a single IP hosted by Octopuce:
```
journalctl --unit ngircd.service|grep "Jul 21.*2001:67c:288:2::231"|wc -l
4347
```
They all have this format:
```
Jul 21 11:39:51 sibirocobombus ngircd[594]: Accepted connection 16 from "[2001:67c:288:2::231]:41240" on socket 8.
Jul 21 11:39:51 sibirocobombus ngircd[594]: Using X509 credentials from slot 0
Jul 21 11:39:51 sibirocobombus ngircd[594]: Shutting down connection 16 (SSL accept error, closing socket) with "[2001:67c:288:2::231]:41240" ...
Jul 21 11:39:51 sibirocobombus ngircd[594]: Client unregistered (connection 16): SSL accept error, closing socket.
Jul 21 11:39:51 sibirocobombus ngircd[594]: Connection 16 with "[2001:67c:288:2::231]:41240" closed (in: 0.0k, out: 0.0k).
```
I'm going to try and use `fail2ban` for this.
Looks like this is happening every 10 seconds:
```
journalctl --unit ngircd.service|grep "Jul 21.*Shutting down"|tail|cut -d ' ' -f 3
23:58:23
23:58:33
23:58:43
23:58:53
23:59:03
23:59:13
23:59:23
23:59:33
23:59:43
23:59:53
```
So here's my attempt at a filter, `/etc/fail2ban/filter.d/ngircd.conf`:
```
# Fail2Ban filter for failed ssl connections to the ngIRC daemon

[INCLUDES]

# Read common prefixes. If any customizations available -- read them from
# common.local
before = common.conf

[Definition]

# Example:
# Shutting down connection 16 (SSL accept error, closing socket) with "[2001:67c:288:2::231]:44846"

_daemon = ngircd

failregex = ^%(__prefix_line)sShutting down connection [0-9]+ \(SSL accept error, closing socket\) with "<HOST>:[0-9]+" \.\.\.$

ignoreregex =

[Init]

journalmatch = _SYSTEMD_UNIT=ngircd.service + _COMM=ngircd

# Author: Alex Schroeder
```
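Before wiring up a jail, the filter can be tested against the journal itself; recent fail2ban versions accept `systemd-journal` in place of a log file:

```
# Run the filter against the systemd journal without touching any jails.
fail2ban-regex systemd-journal /etc/fail2ban/filter.d/ngircd.conf
```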
And this is the jail, `/etc/fail2ban/jail.d/ngircd.conf`. Here I want to ban them after four failed connects within a minute. The default ban lasts 10 minutes; since this attack is moving so slowly, I'm increasing the ban time to an hour.
```
[ngircd]
enabled = true
findtime = 60
maxretry = 4
bantime = 60m
```
Let's give it a try!
```
systemctl reload fail2ban
```
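And to see whether the jail actually catches anything:

```
# Show current failures and banned addresses for the new jail.
fail2ban-client status ngircd
```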
Then again, fail2ban only helps against future attacks. The current level of persistence also deserves its own treatment:
```
ipset create banlist6 hash:net family inet6
ip6tables -I INPUT -m set --match-set banlist6 src -j DROP
ip6tables -I FORWARD -m set --match-set banlist6 src -j DROP
ipset add banlist6 2001:67c:288:2::231
```
There we go.
Oh, and as soon as I did that, it switched to 91.194.61.231, but only once every two or three minutes. Still Octopuce, though.
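A quick `whois` confirms that the new address sits in the same allocation before banning the whole range (the exact field names vary between registries):

```
# Check which network and organisation the new address belongs to.
whois 91.194.61.231 | grep -Ei "^(inetnum|netname|org-name|route):"
```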
```
ipset add banlist 91.194.60.0/23
```
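One caveat: the ipset contents themselves don't survive a reboot on their own. `ipset save` and `ipset restore` can carry them across; how to hook the restore into the boot process depends on the system, so this is just the manual version:

```
# Dump the set to a file and reload it later; -exist keeps the restore
# from failing if the set already exists.
ipset save banlist6 > /root/banlist6.ipset
ipset restore -exist < /root/banlist6.ipset
```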
1.92GiB for the bots?
It recently occurred to me that all my rants about “AI” training are rather specific and isolate certain bad actors — but that I never really talked about the larger issue here, namely that “AI” is the worst parasite the free and open web has ever seen. – Leeches, Leeches, Everywhere