Notes about server security. Aimed at self-hosters and homelabbers.
Most of the advice about how to secure your server is not based on rational thought; it's cargo-cult dogma. This is especially true in the Linux space. I have many strong opinions on server security, having gone through several cycles of following baseless advice, getting burned by it, and then learning more.
The most important thing when it comes to implementing security measures is to really think hard about what the actual effects of your actions are. Security is complicated. Know what you're trying to secure against, and secure against that. As always, you need to strike a balance between security and convenience that works for you, and there is a point where you can say "this is sufficiently secure".
If you don't want to read this, at least watch the following video (and watch it even if you do read this):
How To Protect Your Linux Server From Hackers! by LiveOverflow
I make sure the software on my servers stays up to date. This is probably the most important aspect of server security next to making sure your passwords are strong enough. Depending on the server, I may prefer being notified about updates over having my servers auto-update, so I have the opportunity to check the changelogs myself and see if I need to make configuration changes. It also lets me choose to update at a time that is least disruptive to the people who use the things I host. I have been burned in the past by enabling unattended-upgrades (Debian/Ubuntu) or other auto-updating features on servers, only to have services mysteriously go offline because an update caused an issue.
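As a sketch of the notify-instead-of-auto-update approach, something like the following can run nightly from cron. It assumes a Debian/Ubuntu box with apt and working local mail delivery; adjust the commands for your package manager (e.g. `pkg upgrade -n` on FreeBSD). The recipient and wording are just illustrative.

```
#!/bin/sh
# Hypothetical nightly cron job: mail root when package updates are pending.
# Assumes Debian/Ubuntu with apt and local mail delivery; run as root.
apt-get update -qq

pending=$(apt-get -s upgrade | grep -c '^Inst ')
if [ "$pending" -gt 0 ]; then
    apt-get -s upgrade | grep '^Inst ' |
        mail -s "$(hostname): $pending pending package update(s)" root
fi
```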
I try to avoid software installed with `curl | sudo bash` or similar, because you have to watch for updates separately and it doesn't get updated when you run a system or package update. (It also means trusting an arbitrary script fetched over the network instead of the signed packages provided by your OS/distro.)
Another nice way of keeping track of security updates in particular is to subscribe to RSS feeds or mailing lists which send out notices when some software needs to be updated.
I prefer to use password-protected keys for authentication as opposed to password-based authentication, mostly because of convenience. A password-protected key is a must because it protects against accidental leaks of the keyfile (it gives you time to change your key before someone can use it to log in somewhere). I disable password-based login on my servers because I don't use it and I don't want it to be a possible way into the server (i.e. doing so reduces attack surface).
However, note that it is not insecure to use a sufficiently strong password with password-based authentication.
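For illustration, generating and installing such a key looks like this (the comment and destination host are placeholders):

```
# Generate an ed25519 key and give it a passphrase when prompted; -a raises
# the KDF rounds so a leaked keyfile is harder to brute-force offline.
ssh-keygen -t ed25519 -a 100 -C "me@laptop"

# Install the public key on the server, then turn off PasswordAuthentication
# in sshd_config (see the sample config further down).
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@server.example.com
```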
Enabling 2FA might be useful, though it does require additional setup, and it can get annoying if you do regular rsyncs or anything like that. Also, you will lock yourself out of your server if you lose your second factor, which may be more likely than losing your password manager database or key file. (One hardware-backed approach is sketched after the video link below.)
Enterprise Security: Deploying 2FA For SSH on FreeBSD with FIDO by Klara Systems
^ This video is also useful because it talks about general SSH security (in the context of an enterprise environment, but still).
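If you have a FIDO2 security key and OpenSSH 8.2 or newer on both ends, a hardware-backed key type gets you most of the way to 2FA without a separate PAM module. A rough sketch (comment is a placeholder):

```
# Generate a key that only works while the hardware token is plugged in and
# touched; combine with a passphrase for something-you-have plus
# something-you-know.
ssh-keygen -t ed25519-sk -C "me@laptop"
```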
I do not change my SSH port. Changing it is security by obscurity (barely), and only serves to quiet the logs. Though, given that my mailserver sees about two hosts per day trying to log in, it wouldn't even quiet the logs by much. Any remotely dedicated attacker can just run `nmap` against your server and find the open SSH port. Apparently there are also scanners that build and share databases of IPs with their corresponding open SSH ports, so it wouldn't even slow down an automated attack. There is really no point in doing this.
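For example, a single scan against a host you control shows how little a moved port hides (hostname is a placeholder):

```
# -p- scans all 65535 TCP ports, -sV asks nmap to identify the service
# listening on each open one; a relocated sshd shows up immediately.
nmap -p- -sV server.example.com
```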
Same goes for port knocking. Not only is it not that difficult to brute force, but if anybody is watching the traffic to your server, they can observe the knocking sequence you're using and easily replicate it. Once again, it only serves to quiet the logs, but it can pose a significant barrier to you, the administrator, if you ever have to log into the machine from a system where your port knocking client is not set up. This happened to a friend of mine once, and he subsequently disabled port knocking.
I also don't always disable login as root, depending on the server. It makes little sense to have a separate user only to give them full sudo/doas access, since it's trivial to escalate to root in those circumstances should that account be compromised. It only causes inconvenience when administering the machine.
Contrary to popular belief, it also doesn't make any real difference that an attacker would need to guess a username. Sure, they don't have to guess the username if it's just "root" and can focus on guessing the password or key, but if the password or key will take millions of years to guess anyway, there is no practical increase in security from changing the username. Plus, since people tend to use common names like "admin", "administrator", or the same username they use online, it likely doesn't make a difference anyways. The username is not a secret.
It is more secure to have a non-root user you log in to, then execute `su -` to switch to the root user for admin tasks. A potential attacker would need to know the root user's password in addition to the key or password used to access the non-root user, and there is no sudo command involved which further reduces attack surface (sudo has had some pretty significant vulnerabilities in the past). This can also be inconvenient though, and just having strong authentication for the root user can be considered secure enough depending on your threat model.
However, having separate accounts with sudo/doas configured is useful in the case that you want to, for example, monitor who is running which commands if you have multiple administrators.
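A minimal doas.conf for that multi-admin case might look like the following (usernames are made up; `persist` is the OpenBSD doas keyword that avoids re-entering the password for a short while):

```
# /etc/doas.conf sketch: each administrator authenticates with their own
# password, and every doas invocation is logged with their username.
permit persist alice as root
permit persist bob as root
```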
I do limit the host keys of the server to only the ed25519 key. This is the kind of key I use for authentication, so it's the only kind my servers need. I don't need the others, though I doubt disabling them does much for security.
The same goes for manually specifying the key exchange algorithms and MACs, since SSH will automatically use the most secure option supported. It doesn't make a difference to security if I disable the less secure ones, which are there for compatibility reasons, since I know my client and the server automatically negotiate the best crypto available. I would also have to keep up with new algorithms and make sure my configuration follows current best practice; I wouldn't be able to rely on OS updates taking care of disabling algorithms that have just been found to be insecure. It does make sense to disable these things if other people SSH into the server and you want to make absolutely sure they won't use old cryptography standards, and you know you can keep up with advances in cryptography.
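If you do decide to pin algorithms, here is a sketch of what that looks like in sshd_config, using names current as of recent OpenSSH releases (with the maintenance burden described above):

```
# Restrict key exchange, ciphers, and MACs to modern choices only.
# You now own the job of revisiting this list as recommendations change.
KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com
```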
When I'm using OpenSSH on OpenBSD, I trust that the configuration is secure by default, since those folks tend to know what they're doing.
Here's a sample SSH config based on OpenBSD's default:
```
# Cryptography
HostKey /etc/ssh/ssh_host_ed25519_key

# Authentication
PermitRootLogin prohibit-password
AuthorizedKeysFile .ssh/authorized_keys
PasswordAuthentication no

# Subsystems
Subsystem sftp /usr/libexec/sftp-server
```
Firewalls are good for controlling traffic flow. While they can provide some protection if you make a mistake, firewalls will not automatically make your server more secure. If there is no service listening on a port, your server won't be accessible through that port, regardless of whether a firewall is in front of it or not. If you run an HTTP server, and then lock down your firewall so only the HTTP server can be accessed, nothing has changed from the perspective of an outside observer.
An example of how I effectively use a firewall is to control access to a database jail in FreeBSD. Since jails can't communicate over localhost, the database is listening on a publicly-accessible IP address. I ensure that only the IPs of the jails that need to access the database can do so by using the firewall to control access. Every other host is blocked from accessing the database. (Yeah, I probably could have set up some kind of bridge interface, but this was easier.)
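A sketch of what those pf rules look like on the FreeBSD host (addresses, interface, and port are invented for illustration):

```
# Only the application jails may reach the database jail; everyone else
# is blocked. "quick" makes the first matching rule win.
db_jail   = "192.0.2.10"
app_jails = "{ 192.0.2.20, 192.0.2.21 }"

pass  in quick on em0 proto tcp from $app_jails to $db_jail port 5432
block in quick on em0 proto tcp to $db_jail port 5432
```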
On OpenBSD, there is a sensible default `pf.conf`.
I avoid log-scanning, IP-banning tools like fail2ban and its kin. They add extra complexity (extra attack surface too) and you risk locking yourself out for the purpose of, once again, quieting logs. I once tried to use one to stop a DDoS attack, but it was totally ineffective. It's even less effective these days in the age of IPv6, where everyone gets a /64 block and it's trivial and cheap to try again from a new address. It's not a security measure, only an anti-DDoS/log-quieting measure, and it's not even effective against a remotely dedicated attacker.
Intrusion detection/prevention systems (IDS/IPS) typically scan network traffic and block or alert on anything that appears suspicious. They can be useful for detecting out-of-the-ordinary traffic on your network. They are prone to false positives though, and require careful tuning to be effective. For example, I was once running OPNsense with the IPS on, and it wouldn't let me download games from Steam because it identified the traffic as malicious. They are also resource-intensive to run, since they have to inspect every single packet.
These systems are typically way overkill anyways for a self-hoster or homelabber and they can make diagnosing issues difficult or annoying. These are the kinds of systems that banks and other organizations that have to be compliant with security regulations use.
File integrity monitoring systems typically work by scanning your filesystem and recording each file along with a hash of its contents, so they can periodically verify that files which shouldn't change didn't change. This is of pretty limited usefulness in my experience, since it's trivial for an attacker to run an exploit, place files in /tmp (which you likely won't scan because of too many false positives), and so on without the system noticing. They are also prone to false positives, and the worst thing for security is starting to ignore alerts because they happen so frequently that you get tired of them.
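The core mechanism is simple enough to sketch in shell, which also shows why it's easy to sidestep. This assumes GNU coreutils' sha256sum; the paths and baseline location are arbitrary examples.

```
#!/bin/sh
# Toy file integrity check: record checksums of files that shouldn't change,
# then verify against that baseline later. Real tools (AIDE, mtree, etc.)
# add metadata checks and nicer reporting, but the idea is the same.
baseline=/var/db/integrity.sha256

case "$1" in
    init)  find /etc /usr/local/etc -type f -exec sha256sum {} + > "$baseline" ;;
    check) sha256sum -c --quiet "$baseline" ;;
    *)     echo "usage: $0 init|check" >&2; exit 1 ;;
esac
```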
Most importantly, I try to use well-written and well-tested software whenever I can. Poorly written, overly complex, or brand-new software is likely to have far more vulnerabilities than mature, simple, well-tested software. This is one of the reasons I prefer to use OpenBSD whenever I can.
Monitoring is pretty valuable. You should know when your server has updated its packages, when there's abnormally high CPU load, or anything like that. You can also set up something that notifies you of successful logins so you can spot any unexpected ones. You can write simple scripts to handle this and use a service like Gotify (or just email) to send alerts, or you can install a full monitoring suite like Nagios. I tend to do alert-based monitoring, as I don't care about seeing graphs of CPU usage and the like.
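As a sketch of that alert-based style, a cron job like this pushes a note to a Gotify instance when the 5-minute load average crosses a threshold. The URL, token, and threshold are placeholders, and it assumes bc(1) is installed; swap the curl call for mail(1) if you'd rather use email.

```
#!/bin/sh
# Alert when the 5-minute load average exceeds a threshold. Assumes a
# Gotify server reachable at the placeholder URL below.
threshold=4
load=$(uptime | awk -F'load averages*: ' '{print $2}' | cut -d, -f2 | tr -d ' ')

if [ "$(echo "$load > $threshold" | bc)" -eq 1 ]; then
    curl -s -F "title=$(hostname): high load" \
         -F "message=5-minute load average is $load" \
         -F "priority=5" \
         "https://gotify.example.com/message?token=APP_TOKEN" > /dev/null
fi
```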
These are my opinions based on my experience and knowledge of IT security, applied to self-hosting and homelabbing. I personally choose not to do certain things because I find the additional security they provide, on top of basics like secure credentials, good and up-to-date software, and monitoring, not worth the tradeoff in complexity or convenience. There are plenty of ways you can combine the various things mentioned here to make a superduper ultramega secure server, but you're not a bank or a hospital, so what are you really protecting against? Will these extra measures cause alert fatigue? Will they make administering your server a pain? Always think before you act, especially when it comes to security measures.