The job I have of monitoring several servers isn't that bad, except when I get a call at 8:00 am (only three hours after going to bed) because the servers seem to be down.
Long story short, I can't get to the servers. I call the Miami NAP (Network Access Point) [1] and it turns out we've pegged the circuit with so much traffic that nothing is getting through. Eventually the machine being attacked is located (there are several candidates to choose from) and it's shut off from the network; the traffic clears and access to the other servers is restored.
Since there is a private network between the machines, I'm still able to get to the affected machine (by going through the one machine still connected, then going through the private network—the affected machine was removed from the public network) and check the logs:
```
Feb 28 09:27:37 nap1 kernel: NET: 2263 messages suppressed.
Feb 28 09:27:37 nap1 kernel: TCP: drop open request from 80.222.46.192/3755
Feb 28 09:27:42 nap1 kernel: NET: 1114 messages suppressed.
Feb 28 09:27:42 nap1 kernel: TCP: drop open request from 81.132.246.235/3921
Feb 28 09:27:47 nap1 kernel: NET: 1022 messages suppressed.
Feb 28 09:27:47 nap1 kernel: TCP: drop open request from 217.44.49.238/3751
Feb 28 09:27:52 nap1 kernel: NET: 1090 messages suppressed.
Feb 28 09:27:52 nap1 kernel: TCP: drop open request from 195.158.129.15/4371
Feb 28 09:27:57 nap1 kernel: NET: 1071 messages suppressed.
Feb 28 09:27:57 nap1 kernel: TCP: drop open request from 80.183.81.226/3244
```
And so on and so on …
This was new to me; it looked like some form of DDoS (Distributed Denial of Service) attack other than the typical SYN flood. Some research later in the day revealed that it probably was a SYN flood; I had just never seen the logs produced during one (these are servers I set up, and the other servers that typically get SYN flooded were configured differently than I would have configured them, which would explain why I didn't initially recognize this as a SYN flood). The “X messages suppressed” line means the previous message was repeated X more times without being logged. Going through the log file, I found 572 unique IP (Internet Protocol) addresses making over 1,750,000 fake connection requests over the span of one hour, 53 minutes and 47 seconds, or over 250 connections per second (ouch).
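If you want to run the same kind of tally yourself, something along these lines would work against logs in the format above (the log path is a placeholder, and folding the suppressed-message counts into the total is my simplification; it is a sketch, not the exact method I used):

```
import re
from collections import Counter

LOGFILE = "/var/log/messages"   # placeholder path; adjust for your syslog setup

drop_re = re.compile(r"TCP: drop open request from (\d+\.\d+\.\d+\.\d+)/\d+")
supp_re = re.compile(r"NET: (\d+) messages suppressed")

per_ip = Counter()   # visible "drop open request" lines per source IP
suppressed = 0       # requests dropped without being logged individually

with open(LOGFILE) as log:
    for line in log:
        m = drop_re.search(line)
        if m:
            per_ip[m.group(1)] += 1
            continue
        m = supp_re.search(line)
        if m:
            # "N messages suppressed" = N more repeats of the previous
            # message that never made it into the log.
            suppressed += int(m.group(1))

total = sum(per_ip.values()) + suppressed
print(f"{len(per_ip)} unique source addresses")
print(f"{total} connection requests (logged + suppressed)")
```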
It got me thinking about the problem. Supposedly SYN cookies help, but in this case I would think the kernel could check each incoming SYN request, see if it already has a connection in the SYN receive state from that IP address/port pair, and if so simply drop the request and optionally ban the IP address. I mean, come on, 6,886 requests from 81.56.107.105:3588 and something weird isn't going on? Sure, it's a bit of extra processing, but such a scheme would help with SYN floods of this severity (the five lowest connection-request rates were 256/sec, 238/sec, 201/sec, 180/sec and 78/sec; a threshold of 10 SYN requests per second from a single IP/port would be generous enough).
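Something along these lines, as a user-space sketch only (real code would live in the kernel's TCP stack; every name and threshold here is made up for illustration):

```
import time

# Sketch of the idea: track half-open (SYN-RECEIVED) requests per source
# IP/port pair; a repeat SYN from a pair that already has one pending gets
# dropped, and a source that keeps doing it gets banned outright.
# Everything here is illustrative, not real kernel code.

pending = {}        # (ip, port) -> time the outstanding SYN arrived
repeats = {}        # (ip, port) -> duplicate SYNs seen while one is pending
banned = set()      # source IPs we have given up on
BAN_AFTER = 10      # duplicate SYNs from one IP/port before banning

def on_syn(ip, port, now=None):
    """Return True to accept the SYN, False to drop it."""
    now = now if now is not None else time.time()
    key = (ip, port)
    if ip in banned:
        return False
    if key in pending:                      # already in SYN receive state
        repeats[key] = repeats.get(key, 0) + 1
        if repeats[key] >= BAN_AFTER:       # optionally ban the source
            banned.add(ip)
        return False                        # drop the duplicate request
    pending[key] = now
    return True

def on_established_or_timeout(ip, port):
    """Handshake completed or timed out; forget the half-open entry."""
    pending.pop((ip, port), None)
    repeats.pop((ip, port), None)
```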
Hmmm … on second thought, that would only help in the short run, until the script kiddies change their tactics and just start picking random port numbers, so you would end up with 5,000 connection requests from 81.56.107.105 from 5,000 different port numbers. It would take more code to limit the number of connections per IP address per second (regardless of port number), which means more processing, but it would be a better long-term solution. This is something that might exist in the Linux kernel; I know it can rate shape network traffic.
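The per-address version would look roughly like this (again, just an illustrative sketch with made-up names: it counts SYNs per source IP per second, regardless of port, and bans anything over a threshold):

```
import time
from collections import defaultdict

MAX_SYN_PER_SEC = 10                 # illustrative per-source threshold

window_start = defaultdict(float)    # ip -> start of its current 1-second window
syn_count = defaultdict(int)         # ip -> SYNs seen in that window
banned = set()

def allow_syn(ip, now=None):
    """Return True to accept a SYN from this source IP, False to drop it."""
    now = now if now is not None else time.time()
    if ip in banned:
        return False
    if now - window_start[ip] >= 1.0:    # roll over to a new one-second window
        window_start[ip] = now
        syn_count[ip] = 0
    syn_count[ip] += 1
    if syn_count[ip] > MAX_SYN_PER_SEC:  # too many, regardless of port number
        banned.add(ip)
        return False
    return True
```

The obvious downside is that a busy proxy or NAT box could trip a per-address threshold with perfectly legitimate traffic, so an outright ban might be too harsh in practice.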