________________________________________________________________________________
This is very well written, but also a marvel of overengineering in my opinion. It needs to
- Set up a k3s cluster
- Set up an ingress controller
- Set up a certificate manager
- Set up a Github actions pipeline to build and push Docker images
All of that to deploy a bunch of standard applications on a single server, with very low load. So now, not only do you have to worry about the applications themselves and their configuration, but you also have to worry about all the layers on top of them. And the guide sounds good and easy when you read it, but I wonder how much time you'll spend implementing it, working out issues and quirks and adding new things, compared to the time you'd need for a standard installation on a regular Ubuntu, Debian, Fedora or whatever system. And the resource overhead matters too, especially in disk space (60G of system disk for a home server sounds like a lot to me).
I don't see the advantage here, honestly. I see a lot of moving pieces and complexity added and the only thing you seem to get out of it is an automated deployment, which is not very useful for a home server and also doable with simpler tools (you could even do it with a set of basic scripts that install software and copy configuration files).
Maybe it's more secure due to isolation, but for a simple home server you can (and should) protect yourself against 99% of attacks by using 2FA for SSH, a firewall with only the ports you want open (and possibly restricted to certain IP ranges), plus basic auth + HTTPS on Nginx with fail2ban.
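As a rough sketch of that baseline (the trusted IP range and port list are placeholders; 2FA for SSH is usually added separately via a PAM module such as libpam-google-authenticator):

    # allow SSH only from a trusted range, plus HTTP/HTTPS for Nginx
    sudo ufw default deny incoming
    sudo ufw allow from 192.0.2.0/24 to any port 22 proto tcp
    sudo ufw allow 80/tcp
    sudo ufw allow 443/tcp
    sudo ufw enable

    # fail2ban's default sshd jail covers SSH brute-force attempts out of the box
    sudo apt-get install fail2ban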
Totally agree that this is impressive, but overkill. I've been running a Linux NAS with a few services (transmission, sab, nzbget, the other usual suspects) for over a decade now, starting with on-host services and later migrating to docker+compose (which greatly simplified the bootstrapping process). Recently migrated to TrueNAS to get an even more appliance-like setup, and was very impressed by how quickly I was up and running from scratch. I can have NextCloud deployed in about 10 seconds if I want. Maybe I'm just at the point where I don't want to have to tinker/micromanage my server, but the idea of running Kubernetes for a home server gives me anxiety.
I run microk8s at home not because it’s easier, but because I want to retain the skills/knowledge if I ever have to work somewhere where k8s is used again. Given how ubiquitous it has become there’s a decent likelihood of that being the case if I go looking for work as a backend software engineer. It also has a couple of benefits in that if I ever decide to move my stuff to a cloud provider like DigitalOcean it’s probably going to be dead simple compared to setting up and reconfiguring something running in a VM.
Yeah, I tried swapping my home server stuff to Kubernetes and regretted it. Not only was it much more complex and indirect, but it required more maintenance. After some months on Kubernetes my home lights, which were driven by a simple daemon, stopped working. Kubernetes was on fire, and it was hours of pasting random incantations to get it back up again. I ripped out Kubernetes, set it up in docker-compose, and it's been fine since.
I’m not sure I follow. Do you mean that you use containers but without Kubernetes? If so, that sounds like a wise choice. AFAICT, Kubernetes is meant for really massive-scale deployments, so it seems quite clear to me that using it for home use is overkill.
Yes, that was my conclusion as well.
A general pattern I see in software is that it scales up much better than it scales down, and Kubernetes is definitely like that. I had hopes it would be useful and educational to run for personal projects, but it's just too heavy.
Interesting observation. I think you’re on to something there.
Kubernetes is actually a set of interfaces. In this case, they are not using the default Kubernetes implementation but a simplified one, k3s, so the experience could be different. Or not. But it certainly can't be judged on wpietri's experience as is.
Honestly, Proxmox + TurnKeyLinux LXC for Docker + Caprover does everything I could ever want. If I need a VM, I'll just spin it up in Proxmox.
Also on Proxmox, but run separate LXCs for each docker container. Lets me do per-“application” QoS/monitoring as Proxmox is also running my firewall.
Am I missing out?
Off-topic question now that you mentioned running a Linux NAS... Which hardware are you using? Because I'm interested in building a home NAS but I don't seem to find any small box ready to hold several disks that isn't ridiculously expensive.
I run FreeNAS (FreeBSD based) on HP Microservers. ECC RAM, 4 drive bays, ...
The older Gen8 boxes Just Work.
With FreeBSD, the newer Gen10 boxes would hang the first time you boot; you need to pause it and set the hw.pci.realloc_bars loader variable to 1. Haven't tried a new install recently.
Details here:
https://www.virten.net/2017/10/fix-for-freenas-on-hpe-micros...
For the past 7 years I've run Ubuntu + ZFS with a Xeon E3 + ECC RAM + SuperMicro MoBo inside a NORCO 20-bay 4U chassis. This was a great setup for years since I needed 40TB+ and at the time of build 2TB drives were the best price per GB. Now, I'm moving back from a house to an apartment, so I need something much more compact and quiet and went with a TrueNAS Mini XL, since it's one of the few tower-like form factors with enough hot-swap bays for me. It runs nice and quiet, and comes with 10GigE. TrueNAS (the OS) has been a pleasure to use too, but it's obviously a bit more limited than a full Linux install. I now have it loaded with 10TB drives.
ZFS is the hero here. I tried many solutions in the past (mdadm on ext, btrfs, XFS, hardware RAID) and got burned by rotten bits and confusing UX leading to user error data loss. But in the past 7 years running ZFS I haven't lost a single bit, and been continuously impressed with the incredibly easy to use CLI and "intelligence" of the file system itself. Snapshots, send/receive, datasets, everything is just so well refined. It felt ahead of its time a decade ago, and still does today. Moving the file system from Linux to the FreeBSD-based TrueNAS was effortless.
Not “small”, but I vertically mounted a 4u chassis (front-side down so cool air comes in from the bottom and hot air is exhausted out the top) and it actually feels not much bigger than a standard ATX tower, except I have 9 HD bays.
The case was $120CAD, and the mobo/cpu/ram (i3) were another maybe 300.
_Caveat: I’m not running ZFS, and just do classic “data lives in two places on-site and one off-site” backup_
I have had Fractal Design's Node 304 as a home server / NAS for the last 3.5 years. It's quite reasonably priced, the provided fans are quiet enough and can be replaced later on with Noctuas. It doesn't offer disk hotswap but otherwise it's quiet and I can suffer the downtime ;-) + can handle six 3.5" HDDs :)
https://www.fractal-design.com/products/cases/node/node-304/...
I used a Helios 4 (32TB) and am currently building a Helios 64 (80TB) NAS. It works great with Armbian, which was the key for me.
I'm running a Dell T140.
It's not the cheapest, but has idrac "remote control bios". So I never have to sit next to the box.
It houses 4 disks + you can add SSDs via PC and it has ECC RAM.
Plus I added a couple of years "Next business day on site" warranty, so I'm pretty safe against long downtimes.
I recently bought a Chenbro SR30169 for my backup storage. It’s super compact but also quite noisy.
Instead, I’d recommend getting a µATX tower with sound dampening preinstalled. The extra interior space also makes installation and maintenance a lot easier.
IMO K3s is quite low maintenance and easy to set up. You just need to run a one-liner to install and run the cluster. My only gripe is that the internal certs only have a 1-year expiration, and to rotate them you'll have to restart the cluster a few months before the expiration date (it won't rotate the certs unless you restart it near the expiration date). It would be great if the internal certs had a longer expiration time (RKE has a 10-year expiry by default) or if there were manual control for rotating internal certs.
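A rough sketch of keeping an eye on that, assuming a typical k3s install layout (the exact certificate path may differ on your setup):

    # inspect the expiry date of one of the internal certs
    sudo openssl x509 -noout -enddate \
        -in /var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt

    # restarting the service near the expiry date is what triggers the rotation
    sudo systemctl restart k3s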
Until something breaks. Then you are busy parsing megabytes of logs and jumping from one vague GitHub issue to another. To be fair, I've attempted to install it into an LXD container, but debugging this complexity monster is a nightmare.
If you were just running it without k8s you would still need to generate SSL certificates and set up Nginx (aka ingress) if you wanted secured inbound traffic.
Also CI/CD is completely optional.
I'd only need to get one certificate for Nginx and another for the SMTP server, which are two commands with Let's Encrypt. And setting up nginx as a reverse proxy for an application is as easy as adding two lines to a configuration file. It's far simpler, easier and better documented, and if any problems pop up it will also be easier to debug. You only need to know nginx to solve issues and add things: with the k3s setup it looks like you need to know nginx _and_ Kubernetes.
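For reference, a sketch of what that looks like (the domain names and upstream port are placeholders; the exact certbot plugin flags depend on how you validate the mail host):

    # the "two commands": one cert for the web vhosts, one for the mail host
    sudo certbot --nginx -d www.example.org
    sudo certbot certonly --standalone -d mail.example.org

    # and the "two lines" of reverse proxy, inside the relevant nginx server block:
    #   location / { proxy_pass http://127.0.0.1:8080; }
    # then reload: sudo nginx -s reload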
Or just use caddy and get SSL included within seconds...
> All of that to deploy a bunch of standard applications in a single server, with very low load
> So now, not only do you have to worry about the applications themselves and their configuration, but you also have to worry about all the layers on top of them.
The biggest benefit of K3s here imho is not scaling or performance but having a standardized API to deploy standardized packages/containers/deployments (Docker/Helm) to. So instead of configuring and maintaining one of the many Linux flavours (and all those layers) out there, I have one standard system to worry about.
I have a K3s Rpi cluster running for a few months now. Setup was trivial and maintenance as well. If I have a problematic node I just remove it from the cluster, reflash the SD card with a new install of K3os and put it back in. No state or further configuration to worry about.
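A sketch of that node-swap flow (node names, the server address and the token are placeholders; the install URL is k3s's standard one):

    # on the workstation: evict the bad node and drop it from the cluster
    kubectl drain pi-node-3 --ignore-daemonsets
    kubectl delete node pi-node-3

    # after reflashing, join it back with the server URL and node token
    # (the token lives on the server at /var/lib/rancher/k3s/server/node-token)
    curl -sfL https://get.k3s.io | \
        K3S_URL=https://k3s-server:6443 K3S_TOKEN=<node-token> sh -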
All my previous homelab setups were either hand-crafted snowflakes or configuration managed by some tool (Puppet, Salt or Ansible). Each comes with its own problems, but they all have the problem that they accumulate state over time and become too hard to manage.
But you have a cluster, which is already different from a single node. There I see how a layer to manage several nodes starts making sense. But when you're managing a single server, you already have a standardized API to deploy standardized packages (APT/YUM + SystemD probably). It's just a different one. Of course when you deviate from that it starts getting messy, but that happens with everything.
I also have a single node 'cluster' for the bulkier (disk I/O) stuff that won't fit on the Raspberry Pis. And it's nice to have the same API across all my setups. Apart from the data (backups, photos, media, etc.) that is stored on the disk, there is nothing of state worth saving on the node. If my root disk crashes, I just install K3s again and apply all configurations from the yaml files on my workstation, and K3s pulls everything back up as it was before.
As you said, Linux does offer standardized packages, but they are not applications/deployments. To get them running, beyond installing the binary, still requires a lot of configuration: Nginx (proxy, TLS), a database maybe, storage/LVM, firewall, etc. So you quickly run into tools like Puppet and Ansible to manage this. The disadvantage is that they don't reverse the changes they make. So if you want to try something out and deploy it with Ansible, there is no easy, trivial way to undo it except for reverting all the changes individually. Also, there is always the temptation to quickly tweak something by hand, forgetting to commit it to CM.
With a system like K8s, everything (container, volume, ingress) is a declarative configuration and the 'system' works to converge to the state you declared. If you delete something, K8s will revert all changes. So there will be no lingering state or configuration left, making everything way more manageable imho.
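A tiny example of that apply/undo loop (the image and names are just illustrative):

    # whoami.yaml: a minimal Deployment manifest
    cat > whoami.yaml <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: whoami
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: whoami
      template:
        metadata:
          labels:
            app: whoami
        spec:
          containers:
          - name: whoami
            image: traefik/whoami
    EOF

    kubectl apply -f whoami.yaml    # declare it; k8s converges to this state
    kubectl delete -f whoami.yaml   # undo it; everything created from the file goes away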
> All of that to deploy a bunch of standard applications in a single server
usage: deploy_app "commit message, push right to master!"
    deploy_app () {
        git add --all :/
        git commit -m "$1"
        git push
        ssh user@server <<EOF
    cd app/ && git reset --hard && git clean -df && git fetch && git reset --hard origin/master && sh scripts/deploy.sh
    EOF
    }
deploy.sh
    #!/bin/sh
    set -e

    # java
    sh scripts/rebuild.sh java-api

    # node.js
    sh scripts/rebuild.sh api
    sh scripts/rebuild.sh cron
    sh scripts/rebuild.sh ui
    sh scripts/rebuild.sh windows-api
    sh scripts/rebuild.sh ws

    # terraform
    sh scripts/apply.sh

    # nginx
    docker restart nginx

    # sql
    sh scripts/migrate-database.sh
Rebuilds Docker images, taints the Terraform Docker resources
I have something very, very similar for a new line of business app I've just started building, except with no docker or terraform (I use docker to run a local postgres, that's it).
looks like you're using terraform to provision your docker containers locally? What is the benefit over using docker-compose? Or do you also provision other things?
I’m working on building out the same thing right now (except I’m running it on a RPI cluster in my office), and the objective is my own amusement and enjoyment, (which naturally tends to lead me toward overengineering) but also to learn Kubernetes better.
Reminds me of simpler times not too long ago, when we used to run an entire startup by simply rsyncing files to production via Gitlab CI after tests passed after every commit.
I'm doing that right now, but I can't figure out how to avoid downtime
You make a lot of great points here. Almost all of the features you call out are actually provided by an open source project called Kalm.
Certs, ingress, CI/CD integration etc. all come out of the box.
"by an open source project called..." - claim it. It's your only submission on this portal anyway.
It blows my mind that people set up personal stuff like this when something like "git push heroku master" does the same thing.
For the cost of one Heroku Standard 2X worker, you can get a DigitalOcean managed k8s cluster with 5x 2GB, 1CPU nodes, and that's capable of running basically all of your hobbyist traffic - not just a single service. Heroku is nice, but it's not the same thing.
For hobbyist traffic, you don’t need a standard 2X though.
"When you try to CRAM everything (mail, webserver, gitlab, pop3, imap, torrent, owncloud, munin, ...) into a single machine on Debian, you ultimately end-up activating unstable repository to get the latest version of packages and end-up with conflicting versions between softwares to the point that doing an apt-get update && apt-get upgrade is now your nemesis."
This is not my experience.
My main house server runs:
mail: postfix, dovecot, clamav (SMTP, IMAP)
web: nginx, certbot, pelican, smokeping, dokuwiki, ubooquity, rainloop, privoxy (personal pages, blog, traffic tracking, wiki, comic-book server, webmail, anti-ad proxy)
git, postgresql, UPS monitoring, NTP, DNS, and DHCPd.
Firewalling, more DNS and the other part of DHCPd failover is on the router.
Package update is a breeze. The only time I waste the overhead of a virtual machine is when I'm testing out new configurations and don't want to break what I have.
"just having the Kubernetes server components running add a 10% CPU on my Intel(R) Atom(TM) CPU C2338 @ 1.74GHz."
Containerization is not a win here. Where's the second machine to fail over to?
Containerization and container orchestration platforms are only partly about scalability.
The primary appeal for me is ease of deployment and reproducibility. This is why I develop everything in Docker Compose locally.
Maybe the equivalent here would be something like Guix or Nix for declaratively writing the entire state of all the desired system packages and services + versions but honestly (without personal experience using these) they seem harder than containers.
I'm not deploying; this is __the server__. I do backups, and I keep config in git.
Reproducibility? This is __the server__. I will restore from backups. There is no point in scaling.
If you want to argue that containerization and VMs are portable and deployable and all that, I agree. This is not a reasonable place to do that extra work.
Hey, do whatever floats your boat. Nobody said there is a single solution to every problem.
Don't pick a fight because you are satisfied with _your_ solution that is different from somebody else's solution.
I personally like docker-compose and Vagrant for my private services and development environments.
I use Vagrant when I need a complete VM. Think of a VM for embedded development where I need a large number of tools at very specific versions, and I need them still working in 3 years without maintenance even if I change a lot about my PC setup (I run Linux _everywhere_).
I create a separate Vagrant environment for every project, and this way I can reinstate the complete environment at a moment's notice, whenever I want.
I use docker-compose for most everything else. Work on an application that needs MongoDB, Kafka, InfluxDB, Grafana and so on and so forth? Docker Compose to rule them all. You type one command and everything's up. You type another and everything's down.
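For illustration, a minimal compose file in that spirit (versions are arbitrary; Kafka left out for brevity):

    cat > docker-compose.yml <<'EOF'
    version: "3"
    services:
      mongo:
        image: mongo:4.2
      influxdb:
        image: influxdb:1.8
      grafana:
        image: grafana/grafana
        ports:
          - "3000:3000"
    EOF

    docker-compose up -d    # one command and everything's up
    docker-compose down     # one command and everything's down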
I use the same for my other services like mail, NAS, personal website, database, block storage, etc. Containers let me preserve the environment and switch between versions easily and I am not tied to the binary version of the Linux on the server.
I hate it when I run a huge number of services and then a single upgrade causes some of them to stop working. I want to be able to stay constantly updated and have my services working with minimum maintenance. Containers let me make decisions on each of the services separately.
>_I'm not deploying; this is the server._
Err, that's the very definition of deploying. Putting stuff on "the server".
What you mean is not that you're not deploying, you're not testing/staging -- you change things and test new stuff directly in your production server.
Not sure about reproducibility.
If the HD fails, sure, restore from backup. But what if the motherboard fails, and you buy/build a completely new machine. Does a backup work then, even if all the hardware is different? That's where a container makes restoring easier.
A container does not make restoring easier in the situation you have described.
The host for the containers still needs to be configured. That's where changes to NIC identifiers, etc. need to be handled.
In my situation, the host gets exactly the same configuration. The only things that care about the name of the NIC are a quick grep -r away in /etc/; 95% of everything will be up when I get the firewall script redone, and because that's properly parameterized, I only need to change the value of $IF_MAIN at the top.
On Windows platforms that's usually true.
I've not met a Linux system tarball that I can't drop on any other machine with the same CPU architecture, and get up and running with only minor tweaks (network device names).
As someone who just switched motherboard + cpu on my home server, the worst thing was to figure out the names of the network interfaces.
enpXs0 feels worse than good old ethX with interface naming based on MAC addresses.
It feels worse, but you'll be even less happy when you add or remove a NIC and all of the existing interfaces get renamed.
But if you really want to, you can rename them to anything you want with udev rules.
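Something like this classic persistent-net rule, for example (the MAC address and name are placeholders):

    cat <<'EOF' | sudo tee /etc/udev/rules.d/70-persistent-net.rules
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="lan0"
    EOF
    # after a reboot (or udevadm trigger), the NIC with that MAC always shows up
    # as lan0, regardless of slot changes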
Why wouldn't it? Unless you are changing architecture after a component of your system dies, there's no reason your old binaries would not work.
Drivers, config you missed/didn't realise was relevant/wasn't needed before, IDs (e.g. disks), etc.
Nix or aconfmgr (for Arch) help.
I still like containers for this though. Scalability doesn't mean I'm fooling myself into thinking hundreds of thousands of people are reading my blog, it means my personal use can outgrow the old old PC 'server' it's on and spill into the new old one, for example. Or that, for simplicity of configuration, each disk will be (the sole disk) mounted by a Pi.
There's more than one way to skin a cat. If you're running something as simple and low profile as OP suggested, all you need to back up from the system are the packages you installed and a handful of configurations you changed in /etc. That could be in Ansible, but it could be just a .sh file, really. You'll also need a backup of the actual data, not the entire /. Although, even if all you did was back up the entire /, there's a good chance it would work even if you try to recover it on new hardware.
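A minimal sketch of that on a Debian-ish box (file names are arbitrary):

    # what you installed, and the config you changed
    apt-mark showmanual > packages.txt
    sudo tar czf etc-backup.tar.gz /etc
    # ...plus a separate backup of the actual data

    # on a fresh install, roughly:
    xargs -a packages.txt sudo apt-get install -y
    # then restore the handful of files you actually touched from etc-backup.tar.gz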
The services mentioned by OP don't need to talk to each other; they are all things that work out of the box by just running apt-get install or equivalent. You don't need anything really fancy, and you can set up a new box with part of the services if they are ever taking too many resources (which, for a small setup, will likely never really happen. At least in my experience).
> Does a backup work then, even if all the hardware is different
Full disk backup, Linux ? Most likely. We rarely recompile kernels these days to tailor to some specific hardware, most are supported via modules. It could be that some adjustments are going to be necessary (network interface names? nonfree drivers). For the most part, it should work.
Windows? YMMV. 10 is much better than it was before and has more functional disk drivers out of the box. Maybe you need to reactivate.
The problem is mostly reproducibility. A system that has lived long enough will be full of tiny tweaks that you don't remember about anymore. Maybe it's fine for personal use but it has a price.
Even for personal servers (including Raspberry Pis) I try to keep some basic automation in place, so if they give up the ghost, they are cattle. Not pets.
Why do you feel the need to keep config in git if you've got backups? I think the answer to that is the same reason that I'd rather keep a record of how the server is customised than a raw disk backup.
I do think containerisation and VMs are more overhead than they're worth in this case, but there's definitely a lot of value in having a step-by-step logical recipe for the server's current state rather than just a snapshot of what's currently on the disk. (I'd favour puppet or something similar).
I keep config in git so that when I screw up, I can figure out how. What else would I be doing? Merging conflicts with my dev team?
> I keep config in git so that when I screw up, I can figure out how.
Right. Which is exactly why I want installing a new service, upgrading a library etc. to be in git rather than just backing up what's on disk. A problem like not being able to connect to MySQL because you've upgraded the zoneinfo database, or the system root certificates, is a nightmare to diagnose otherwise.
> Reproducibility? This is the server. I will restore from backups.
To me, reproducibility is more than about restoring the old bucket of bits I had. It's about understanding, about being able to reproduce the means that a system got the way it is.
With Kubernetes, there is a centralized place where the cluster state lives. I can dump these manifests into a file. The file is human readable, well structured, consistently structured, and uniformly describes all the resources I have. Recreating these manifests elsewhere will let me reproduce a similar cluster.
The resources inside a kubernetes cluster are just so much easier to operate on, so much easier to manage than anything else I've ever seen. Whether I'm managing SQS or Postgres or Containers, being able to have one resource that represents the thing, having a manifest for the thing, is just so much more powerful, so much better an operational experience than either having a bucket of bits filesystem with a bunch of hopefully decently documented changes over time on it, or a complex Puppet or Ansible system that can enact said bucket of bits. Kubernetes presents high level representations for all the things on the system, of all shapes and sizes, and that makes knowing what I have much easier, and it makes managing, manipulating, replicating those resources much much easier & more straightforward.
Wrapping a new abstraction layer around a single server does not help, it is an expense you do not need. "Recreating these manifests elsewhere" will not work, because there is no elsewhere.
You cannot add complexity to a system to make it simpler.
You cannot abstract away the configuration of a system when there is only one system: you must actually do the configuration.
There is no point in having a high-level representation of all the things on the system: you have the actual things on the system, and if you do not know how to configure them, you should not be running them.
> There is no point in having a high-level representation of all the things on the system: you have the actual things on the system, and if you do not know how to configure them, you should not be running them.
> You cannot abstract away the configuration of a system
I've spent weeks setting up postgres clusters, with high availability, read only replicas, backups, monitoring, alerting.
It takes me 30 minutes to install k3s, the postgres operator, & recreate that set up.
Because there are good consistent abstractions used up & down the Kubernetes stack. That let us build together, re-use the deployable, scalable architectures of lower levels, across all our systems & services & concerns. That other operators will understand better than what I would have hand built myself.
> "Recreating these manifests elsewhere" will not work, because there is no elsewhere.
It'll work fine if you had another elsewhere. Sure, backups don't work if you have nothing to restore onto.
Dude, this is such a negative attitude. I want skepticism, criticism, but we don't have to hug servers so close forever. We can try to get good at managing things. Creating a data plane & moving our system configurations into it is a valid way to tackle a lot of management complexity. I am much more relaxed, many other operators are, and getting such a frosty, negative "nothing you do helps" dismissal does not feel civil.
> _This is why I develop everything in Docker Compose locally._
For a small setup like this, just having a docker compose file in version control is more than sufficient. You can easily leverage services someone else has set up, and the final config is easy to get going again if you need to rebuild the machine due to hardware failure.
Exactly! Other non-scalability concerns they address (specifically talking about Kubernetes here) are a basic level of monitoring/observability; no-downtime (rolling) updates; liveness/readiness probes; basic service discovery and load balancing; resiliency to any single host failing (even if the total compute power could easily fit into a single bigger server).
Which of these are things you want on your house server? That's what the article author is writing about, and what I am writing about.
I do not need an octopus conducting a herd of elephants.
I can agree that the idea of reaching for Kubernetes to set up a bunch of services on a home server sounds a bit absurd.
"How did we get here?"
I'm not an inexperienced codemonkey by any stretch of the term, but I am a shitty sysadmin. And despite being a Linux user from my early teens, I'm not a greybeard.
As sorry a state as it may sound, I have more faith in my ability to reliably run and maintain a dozen containers in k8s than a dozen standard, manually installed apps + processes managed by systemd.
Whether this is a good thing or a bad thing, you can likely find solid arguments both ways.
You only have to learn 1 interface (albeit a complicated one) to use Docker/k8s compared with 1 interface per service to run them manually.
Hm, these days I feel like I only have to learn systemd. Reload config? View logs? Watchdog? Namespaces? It’s all systemd. If you are running on one machine, what does Docker/k8s give you that you do not already have?
> If you are running on one machine, what does Docker/k8s give you that you do not already have?
That feeling that you are part of a special, futuristic club.
Nothing, but it's pretty common to have the home server plus a desktop/laptop where you do most of the work (even for the home server), which may not be Linux, in which case containers are the easiest way.
I recently ended up setting up "classic" server again after a significant time keeping mostly containerized infrastructure on k8s.
Never again, the amount of things that are simply _harder_ in comparison is staggering.
Sorry to sound pedantic, but what was harder? Containerized infra or a classic server? I assume the former but wanted to be sure.
"Classic" approach turned out to be maddeningly harder.
Everything, even inside a single "application", going slightly off the reservation. Services that would die in stupid ways. Painful configuration that would have been abstracted away were I running containers on k8s (some benefits might be realized with Docker Compose, but docker on its own is much more brittle than k8s).
So much SSH-ing to a node to tweak things. apt-get fscking the server. Etc.
Oh, and logging being a shitshow.
To me it sounds like the latter, a classic server, which I agree... After getting comfortable with containerized deployment, "classic" servers are a huge pain
it honestly sounds just like you don't know what you are doing and docker was made for you
Fair enough, my point was more about using k8s to deploy applications rather than “house server” stuff, where it’s indeed unneeded more often than not.
Having zero downtime updates is quite nice. For example, I can set FluxCD to pin to a feature release of Nextcloud, and it will automatically apply any patch updates available. Because of the zero downtime updates, this can happen at any time and I won't have any issues, even if I'm actively using Nextcloud as it's happening.
Nix/NixOS for this purpose is very nice.
It is because at the time I was doing a lot of Python development, and I was (and still am) using my server as a dev workstation.
Isolation with virtualenv was not great and many projects needed conflicting versions of system packages, or newer versions than what Debian stable had.
A lot of the issue was me messing around \o/
"just having the Kubernetes server components running add a 10% CPU on my Intel(R) Atom(TM) CPU C2338 @ 1.74GHz." Containerization is not a win here. Where's the second machine to fail over to?
I think it is worth it in order to get a centralized control plane and automatic build and deployment for everything.
But I agree with you, some apps (postfix, dovecot) don't feel great inside a container (sharing data with UID issues is meh, postfix with its multiprocess design also...).
I just wanted to have everything managed in containers, and they were the last ones left, so I moved them in too.
_> I was (and still) using my server as a dev workstation_
This seems like a very bad idea, and I'm not at all surprised you had problems. But it doesn't look like the problems were with the server part; if your machine had only been a server you could have avoided all the stuff about needing to pull from unstable. So I don't think "don't put all the server stuff on one machine" is the real takeaway from your experience; I think the real takeaway is "don't use the same machine as both a server and a dev workstation".
Well, at that point you just move the problem from "how to manage the home server" to "how to manage the dev workstation". You need _somewhere_ where you can install not just random Python packages but also random databases, task queues etc. during development. I guess "accept that your dev box will always be flaky and poorly understood, you'll have to spend time productionising anything before you can deploy it anywhere else, and if you replace it you'll never get things set up quite the same" is one possible answer (and perhaps the most realistic), but it's worth looking for a better way.
_> at that point you just move the problem from "how to manage the home server" to "how to manage the dev workstation"_
No, you separate it into two problems that are no longer coupled to each other. The requirements for a server are very different from those for a dev workstation, so trying to do both on the same machine is just asking for trouble.
_> You need somewhere where you can install not just random Python packages but also random databases, task queues etc. during development._
Yes, that's what a dev workstation is for. But trying to do that on the same machine where you also have a server, which doesn't want all that stuff, is not, IMO, a good idea.
_> I guess "accept that your dev box will always be flaky and poorly understood_
It will be as flaky and poorly understood as the code you are developing and whatever it depends on, yes. :-)
But again, you don't want any of that on a machine that's a server. That's why it's better to have a server on a different machine.
The biggest objection in this thread is to the 10% overhead of containers, so it seems strange to see the 100% overhead of two separate computers as a better solution.
And at some point the code has to go from dev code to production code. If you're managing dev and production in different ways, then you're going to have to spend significant time "productionising" your dev code (listing dependencies in the right formats etc.). And the bigger the gap between the machine you develop on and the machine you deploy to, the higher the risk of production-only bugs. So keeping your dev workstaion as similar as possible to a production server - and installing dependencies etc. in a way that's compatible with production from day 1 - makes a lot of sense to me.
We seem to be talking about different kinds of servers. You say:
_> at some point the code has to go from dev code to production code. If you're managing dev and production in different ways, then you're going to have to spend significant time "productionising" your dev code_
This is true, but as I understand the article we are talking about, it wasn't talking about a dev workstation and a production server for the same project or application. I can see how it could make sense to have those running on the same machine (but probably in containers).
However, the article was talking about a dev workstation and a home server which had nothing to do with developing code, but was for things like the author's personal email and web server. Trying to run those on the same machine was what caused the problems.
I presume what the author is developing is code that they're eventually going to want to run on their home server, at least if they get far enough along with it. What else would the end goal of a personal project be?
Reading this chain, you seem to want it both ways: that a dev machine runs unstable config and is in an unknown state due to random package installation, but the same machine should be stable and reproducible.
Yes, that's exactly why the OP's approach is appealing! I want it to take minimum effort to install some random new package/config/dependency, but I also want my machine to be stable and reproducible.
_> What else would the end goal of a personal project be?_
Um, lots and lots of possibilities?
It is a bad idea, but at my third web development job, my workstation was also the company's secondary public DNS server, so I've seen worse things.
Wow! More stories about _this_ place, please...
It was a magazine publisher in DC, they transitioned into web development for their print customers.
I was told to never turn my workstation off.
> "When you try to CRAM everything (mail, webserver, gitlab, pop3, imap, torrent, owncloud, munin, ...) into a single machine on Debian, you ultimately end-up activating unstable repository to get the latest version of packages and end-up with conflicting versions between softwares to the point that doing an apt-get update && apt-get upgrade is now your nemesis."
I use Proxmox to avoid that. Some things I run in VMs (often with Docker containers); other things I run in LXC containers (persistent containers that behave like VMs).
I can then use automation (mostly Proxmox templates and Ansible) to make deployments repeatable.
I'm interested in k3s, though, I'll give it a better look :)
The next addition will be some form of NAS, either a qnap/synology or a custom build using FreeNAS or Unraid (probably FreeNAS).
> _"The only time I waste the overhead of a virtual machine is when I'm testing out new configurations and don't want to break what I have."_
So what happens when it does break? What do you do to fix it or do you just skip the update?
This is one of the things that containerization solves, amongst the benefits described by others here.
Installing Docker, creating volumes, and running/updating images are all single line commands, but with much better isolation and portability.
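For example, something along these lines (the image, volume name and port are arbitrary):

    docker volume create nextcloud-data
    docker run -d --name nextcloud -p 8080:80 \
        -v nextcloud-data:/var/www/html nextcloud

    # updating is mostly pull + recreate; the named volume keeps the data
    docker pull nextcloud
    docker rm -f nextcloud
    docker run -d --name nextcloud -p 8080:80 \
        -v nextcloud-data:/var/www/html nextcloud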
What do I do to fix it?
I earn my pay as a sysadmin, of course.
"This is one of the things that containerization solves"
No. Containerization does not fix a broken system. Only fixing the broken system does that. Containerization lets you apply your fix to all the hosts that you want fixed, and as we have thoroughly established, the number of those hosts in this scenario is one.
So far I have been told that containerization fixes configuration problems and allows multiple services to be configured the same way. No container will fix a typo in /etc/dovecot/config.d/20-imap.conf, and no container management system will make nginx.conf look like sendmail.cf.
"running/updating images are all single line commands"
There is some kind of elven glamour being cast over Kubernetes, Docker, and other container/VM/serverless systems that confuses people about the difference between configuring a service to be useful and managing the lifecycle of that service over a scalable number of machines. Docker cannot update an image that you have not already fixed.
This reminds me of the MBA illusion, that claims that all management can be performed most efficiently by a management specialist with no particular knowledge or skills in the actual production process; all of that is irrelevant detail for somebody else to do.
I assure you that detailed understanding is the sine qua non of getting things done.
If you're concerned about resource usage, at least with Swarm or Docker Compose, you get things like health checks, restart policies and replication for free with minimal overhead. Scaling horizontally is easy, too.
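A rough sketch of those bits in a compose file (service and healthcheck command are placeholders; replicas are only honoured when deploying to Swarm with docker stack deploy):

    cat > docker-compose.yml <<'EOF'
    version: "3.8"
    services:
      web:
        image: nginx:alpine
        restart: unless-stopped
        healthcheck:
          test: ["CMD", "wget", "-qO-", "http://localhost/"]
          interval: 30s
          retries: 3
        deploy:
          replicas: 2   # applies under Swarm
    EOF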
It's really nice to have your infrastructure described as code versus in some configuration management tool or shell scripts, and for that reason alone I'll use it even on a single machine.
What do you do as mitigation if one application has a remote execution? I use containers as a security layer.
Containers have a pretty poor security record. Frankly I'd feel more safe with a non-containerised service running under a non-root user than with a service running as root in a container, not that I'd feel particularly safe with either.
Containers != Docker.
Podman supports rootless containers for quite a while, without a daemon, and it is compatible with Kubernetes.
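For instance, a quick rootless sketch (image and names are arbitrary):

    # as an ordinary user: no daemon, no root
    podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine

    # emit a Kubernetes-style manifest from the running container; it can later
    # be replayed with `podman play kube web.yaml` or applied to a cluster
    podman generate kube web > web.yaml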
> Containerization is not a win here. Where's the second machine to fail over to?
It would probably take an hour or less to add another node to this setup. The author made some choices that block scaling, and changing that (the service-lb and local-provisioner installs) would take a bunch of that time. Finding a resilient replacement for local-provisioner is something I wish we were better at, but the folks at Rook.io have a pretty good start on this.
To me, the real hope is that we move more and more of our configuration into Kubernetes state. Building a container with a bunch of baked in configuration is one thing, but I hope we are headed towards a more "cloud native" system, where the email server is run not as containers, but as an operator, where configuration is kept in Kubernetes, and the operator goes out & configures the containers to run based on that.
I agree that running a bunch of services on a Debian box with a couple of different releases (testing/unstable) pinned into apt is not really that hard. But I am very excited to stop managing these pets. And I am very hopeful that we can start moving more and more of our configuration from /etc/whateverd/foo.conf files into something centrally & consistently managed. The services themselves all require special, unique management today, & the hope, the dream, is that we get something more like big cloud style dashboards, where each of these services can be managed via common Kubernetes tools that apply across all our services.
"But I am very excited to stop managing these pets."
When you have a herd of cattle which is of size 1, it's a pet. You don't get any efficiencies from branding them all with laser-scannable barcodes, an 8-place milking machine, or an automatic silage manager. You still need to call the vet, and the vet needs to know what they are doing.
Having a consistent, managed experience with good top down controls is, in my world, far more efficient than tackling each service like a brand new problem to manage & operate independently.
I utterly fail to understand you. Please explain how configuring postfix is made easier by having a container.
You listed 21 different pieces of software, 21 different needs in your post.
For some reason almost everyone commenting here seems to think it's totally unreasonable to try to use a consistent, stable tool to operate these services. Everyone here seems totally convinced that, like you, it's better to just go off & manage 21 services or so independently, piece by piece, on a box.
If it were just postfix, fine, sure, manage it the old fashioned way. Just set up some config files, run it.
But that's not a scalable practice. None of the other 20 pieces of software are going to be managed quite like that. Tools like systemd start to try to align the system's services into a semi-repeatable practice, but managing configuration is still going to be every-service-for-itself. Trying to understand things like observability & metrics is going to be highly different between systems. It seems so past due that we start to emerge some consistent ways to manage our systems. Some consistent ways of storing configuration (in Custom Resources, ideally), of providing other resources (Volumes), of exposing endpoints (Endpoints). We can make real the things that we have, so far, implicitly managed & operated on, define them, such that we can better operate on them.
It's not about containers. It's about coherent systems, which drive themselves to fulfill Desired State. Containers are just one example of a type of desired state you might ask for from your cluster. That you can talk about, manipulate, manage any kind of resource- volumes, containers, endpoints, databases, queues, whatever- via the same consistent system, is enormously liberating. It takes longer to go from zero to one, but your jump from one to one hundred is much much smoother.
> but managing configuration is still going to be every-service-for-itself. Trying to understand things like observability & metrics are going to be highly different between systems
Literally none of this matters for a home server. I have a mail/web server that I haven't had to change the configuration on since I last set up Let's Encrypt like 4 years ago. I don't check metrics or have observability other than "does it work", and that does fine.
You’re caught up sucking in a bunch of technical debt preparing for something that simply doesn’t matter.
it takes less time to set up k3s & Let's Encrypt than it does to diy, under 30 minutes.
for some people perhaps diy everything is a win, makes them feel better, but I intend to keep building, keep expanding what I do. having tech that has an actual management paradigm versus being cobbled together makes me feel much better about that future, about investing myself & my time, be it a little bit of time, or more.
i've done enough personal server moves to know that the old school automation i had, first puppet, then ansible, is still a lot of work to go run & coax back into action. but mostly, it just runs, leaves me with a bucket of bits, doesn't help manage at all.
> simply doesn’t matter
lot of ways to think about our computing environments and I am not in the "simply doesn't matter" camp.
maybe that applies to lots of people. they should take a spin at Kubernetes, i think it'll do an amazing amount of lifting for them & you can be up & running way faster.
Sounds like a dream to compromise! Rather than one attack surface, there's like 25. Crack one and you take the lot.
Plus the additional huge attack surface of docker, docker hub and k8s
It's a home server, not an enterprise network. Security is a trade-off.
I would assume the administrator cares enough about their own data to want to do it properly.
It provides 25 services. If I were going to deploy 25 machines to provide 25 services, that would not reduce the attack surface... of my house server.
With no segregation, you crack one service you get access to all 25 services and their data.
With segregation, you crack one service you get access to just that container/physical service.
Linux has had support for multiple users for quite some time now. Popping a standard service process doesn’t get you any more than the privileges of the running user, which is usually scoped to that service alone.
Install Ubuntu, Postgres + Apache2, then su to www-data and try to read some data from the postgres data directory to see what I mean.
I don't believe it; if you have PostgreSQL, package updates can't be a breeze. Every major version you need to manually convert the database.
You'd need to do that with a container too: if the volume mount for /var/lib/postgresql/data still has the older version's data and you update the container, then that conversion needs to be done as well. Alternatively, a dump and reimport.
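The dump-and-reimport route looks roughly like this with containers (container names, versions and the password are placeholders):

    # dump everything from the old major version
    docker exec old-pg pg_dumpall -U postgres > all.sql
    docker stop old-pg

    # start the new major version on a fresh data volume, then reimport
    # (give it a moment to finish initialising before the import)
    docker run -d --name new-pg -v pgdata13:/var/lib/postgresql/data \
        -e POSTGRES_PASSWORD=change-me postgres:13
    docker exec -i new-pg psql -U postgres < all.sql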
I _hate_ this about postgresql. Why is it so? It's extremely annoying, imo, and it holds me back from upgrading
I'm surprised no one mentioned using NixOS[0] for that kind of personal server thing, which is how I discovered it myself.
Setting up unnecessary "control plane" daemons and services in an awful indent-nightmare YAML DSL feels so clunky and error prone compared to writing a few hundred lines of Nix, which reproducibly builds your entire server image either live or on a boot medium. The resulting image can also be tested by launching a qemu VM with no additional code.
The language itself provides some amount of syntactic validation. Thousands of fairly up-to-date packages[1] readily available and pre-compiled (but you can also build them if you prefer). Most sysadmin tooling already comes with specific, type-safe config parameters. Just override the defaults you don't like. Let's Encrypt support for an nginx vhost is a single "enableACME = true" line!
While there are still some rough edges like secret sharing/storage, I'd encourage giving it a try before getting to the big, unnecessary guns.
[0]
[1] https://search.nixos.org/packages
I know it's popular to hate on K8s and containers as overkill for personal use and that they add tons of complexity. But for those of us who live and breathe Kubernetes every day professionally, it's honestly easier to do things this way.
K3s gets you set up in minutes and then you can use kubectl + Helm to set up everything you want.
Would you recommend k3s, kubectl, and helm for managing a single VPS? Asking for a friend...
I actually do this on my small VPS (on dedicated servers I usually use RKE): k3s + an NFS server so I can use NFS volume mounts. Like others said, k3s consumes ~500MB of RAM, so keep that in mind if your VPS has a small amount of RAM. Idle CPU usage is pretty small though, I only see ~5% or less on my small VPS. Don't use traefik because the documentation sucks for now: v1 and v2 have different configuration and Google often returns results for v1, making things harder.
I can't confirm this. I use traefik (+ Docker) for almost everything now. First thing I do with a new server is set up traefik as a systemd service. Everything runs smoothly and I didn't have any issues with the documentation. In fact, when I first tried it I was surprised because it just worked. It worked instantly. No googling, no weird errors, it just worked. Once you have a good understanding of how it works, it's just easy. I now have my default configuration which I can use almost everywhere. Also, automatic Let's Encrypt certificates are a big +.
For use with vanilla docker and docker compose, traefik is absolutely convenient, though the issue with googling v1/v2 docs remains and will bite new users. But for use in Kubernetes, I think sticking with nginx ingress and cert-manager is better for new users as they basically provide similar functionality (traefik is not marginally easier to use here) and there are myriads of resources on the internet available on how to use them.
I spent waaaay too much time debugging configuration because I was running v2 but Google results gave me v1 options... Maybe it would have been a good idea to warn the user if they're using the old configuration format (afaik I didn't ever see a warning in the logs).
The only downside is that k3s takes almost half a gig of ram. That does bump up the minimum VPS side a bit. It is overkill, but, in my mind, it is wonderful & greatly liberating overkill.
It feels a little silly setting up a webserver the first time with a 45 line yaml file, but as you continue to make your way in the world, kubernetes is just such an amazingly pleasant top-down way of controlling systems, where it's so easy to see what you have, to see what is behaving and what isn't, and it's so easy to talk & share & communicate with other people about what you are doing versus trying to understand some handjammed nonsense a previous sysop scratched together 4 years ago.
I'm rapidly planning out how to extend my Kubernetes life further. I've spent years writing Ansible scripts to automate setting up workstations, but I'd really like to switch over the bulk of the things I rely on to being more Kubernetes based. For example, I run prometheus's node-exporter to monitor the health of my systems. But if the config changes, or if I add a plugin, or if I start using a new laptop, I need to go re-run the script, and any other scripts. With Kubernetes, I can create a daemonset and it's fire and forget. It'll run on every node, and any changes I make will get applied to all nodes, whenever the node comes online or joins. Also my prometheus server wont need to be manually updated to point to these nodes when they get added.
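That daemonset is roughly this much YAML (namespace and image tag omitted for brevity; a sketch, not a production config):

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: node-exporter
    spec:
      selector:
        matchLabels:
          app: node-exporter
      template:
        metadata:
          labels:
            app: node-exporter
        spec:
          hostNetwork: true
          containers:
          - name: node-exporter
            image: prom/node-exporter
            ports:
            - containerPort: 9100
    EOF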
Quick note: K3s embeds kubectl and helm; the install script symlinks helm and kubectl to the k3s binary. Also crictl for the container runtime tool. If you ask for k3s, you get these other things.
This particular problem you mentioned can easily be solved in ways that do not involve Kubernetes, for example, having a linux image w/ prometheus daemon running and some sort of service discovery (consul) that prometheus hooks to to discover the infrastructure.
Kubernetes is more all-encompassing than the alternatives. I see no incentive to go about cobbling together specific tools to get one capability, when I can start using a much more powerful, consistent cloud architecture to manage my everything.
I have preferred smaller tools that do one thing only and are easy to manage / break in predictable ways.
An opinion we see thoroughly, adamantly expressed throughout the comments on this posting. Everyone seems very opposed.
It seems odd to me, because Kubernetes is so new, and there have been few attempts to manage things consistently & well, under one umbrella, that have happened. Certainly none have gotten anywhere near this successful.
I think there's a lot of value to trying to find core abstractions & use them throughout. Managing each resource via totally separate tooling is how we've always done things from an ops perspective, but the coders have been trying to distill & create enduring shareable value for a long time, find ways to share value. It's weird to me how resistant, how much people think they know what their opinion is, how certain they are that only small specific tools can help them, and how convinced they are that bigger attempts are going to be hard to manage, or convinced that they won't be able to see or understand breakage. I don't think we have the experience to know, yet people seem deeply committed already.
For those of you hitting ctrl-f "postfix": Postfix is very much not designed to run within a container. Unless you can absolutely 1000% guarantee that you will only ever have one instance of the postfix container running, you _will_ get data corruption and it is _very_ likely that you will lose email, because postfix does not support multiple instances sharing the same data/queue directory.
A ReplicaSet, as given in the OP's yaml, does not suffice here because it is not guaranteed that the failing pod (which triggers a new pod to start) doesn't come back to life. That will almost guarantee that your postfix installation breaks, especially as one postfix instance starts cleaning up files while another instance hitting the same directory is trying to access them.
On edit: yes, Postfix 3.3 can be run as a container process. What I mean is running it within a standard container environment where you would expect multiple instances to run for failover etc.
It's honestly not even that hard to run a full blown k8s cluster, I've been doing it for years, and even though I've made my share of mistakes along the way and downtime, I'm no longer scared of the k8s complexity boogeyman. Learn how it works from first principles, and the magic will disappear -- errors can still be daunting, but if you follow the concepts something normally jumps out at you. More than anything what gets people is the hypetrain and quickly moving libraries that just don't follow the "don't break user space" mantra these days.
k8s is complex, but the complexity is for the most part essential. Just about every kubernetes concept maps to a thing _you'd have to handle yourself_ if you ran one or more robust personal servers. It is a simple control loop, that you can run on only one machine -- one bit to check what should be running (etcd + apiserver), one bit to run the things (kubelet), and some bits to make sure they stay running (controller-manager and the other controllers).
It is certainly more complex than Dokku[0] or Caprover[1] but that's because those are very focused on running applications -- Kubernetes aims to run more things, and for more use cases, but you don't have to _know_ about all the use cases, just the ones that matter to you.
All that said, I've been working on an orchestrator that is simpler to operate than kubernetes, because I think kubernetes stumbled upon its most powerful feature (operators) too late, and the answer to simplicity might actually be breaking kubernetes apart _even more_.
[0]: https://github.com/dokku/dokku
[1]: https://github.com/CapRover/CapRover
LOL. I want to go back 10 years then. Or just use dokku. We've gone crazy guys.
I actually still use dokku for my side projects and it works great. Very little learning curve, minimal headaches. Sure it doesn't scale out but that's not been a problem so far.
Dokku scales fine, just resize the VM to something bigger.
I have a looong way to go before my 15 side-projects need a server that costs anything over $20/mo.
Yes it scales "up" which has been sufficient for my needs, but not "out" (i.e. to multiple app servers, without a lot of manual work on top).
Dokku is fantastic for deploying web apps. I use it to deploy sideprojects and it works perfectly.
For my home server, I use Docker with Compose, though I do find myself wishing for something a tiny bit more turnkey. Not too turnkey, just something that would allow me to push to a repo and have that be autodeployed with Compose.
This is the same process I use. I'm still annoyed at the lack of simple deployment systems which use Compose, so I wrote my own.
What seems crazy to me is that, a decade or two ago,
* We had few standard ways of doing things. Everyone was cobbling together their own stack, figuring out their own ways to run a plethora of services needed to run & operate an internet connected box.
* Each service or daemon needed to run your systems had its own stand-alone management interfaces & systems. Nothing worked with anything else. There was no apiserver there to store the configurations you wanted. You had whatever scripts you wrote, checked in to source, then a big "do the scripts" button, and then you got a bunch of files written all over kingdom come to various hosts to do all the various tasks that supposedly would run your system, you hoped.
* People with bare-metal had few tools to auto-scale or get resiliency for their systems. If you do have three machines, most operators managed them like pets, not cattle.
Ten years ago was 2010. VMs had already taken over everything by then. Some were convinced that VM images would take over application distribution, because they solved all dependency problems. Others weren't convinced and pointed to problems with insight and integration.
Resiliency and scaling weren't any easier or harder than now. It's not 1985 we're talking about.
> * We had few standard ways of doing things.
We still have few standard ways to do things. Yeah, Docker. It's a thing.
Then add Kubernetes and you're really back to ten years ago. Unless you're running stock GKE/EKS/AKS (three different variants already), you don't really have a standard.
Very nice post indeed, even though I chose years ago to stop at just running Debian stable, for personal stuff.
Someone, maybe 10 years ago, said that at scale, one should treat servers like cattle not pets. At home, I feel it’s OK to treat your server as a pet. One self hosted server that does it all is not a 2020 anomaly, it’s just boring and effective :)
I have spent a small part of my imposed 2020 free time simplifying my family's digital cattle down to a few self-hosted digital pets. I liberated myself both from gear and services I was maintaining out of sentimental value or some professional distortion: the OpenBSD systems I love but don't use enough, the Apple rMBP I never use, the VPSes for my personal services... donated, sold, or retired. It feels great.
Why not a pet-cattle hybrid? You should be able to turn a pet into cattle with Ansible easily.
I actually removed ansible from my setup as I got rid of redundancy.
Good backup strategy and unattended-upgrades is all the automation I need. For my personal systems.
From the K3s webpage:
"We wanted an installation of Kubernetes that was half the size in terms of memory footprint. Kubernetes is a 10-letter word stylized as K8s. So something half as big as Kubernetes would be a 5-letter word stylized as K3s. There is no long form of K3s and no official pronunciation."
"ssh ${HOST} '...; curl -sfL
| sh -'"
Why do we blindly run shell scripts from a website? I see this far more often than I should...
As opposed to blindly running the precompiled binary you manually downloaded from the site?
You're right that from a purely security-oriented point of view there's not much difference. All code that you didn't write yourself can theoretically do anything it likes to your computer. (Unless sandboxed, of course.)
But from a practical point of view in the context of convention, expectation, and trust, curlpipes and stand-alone binaries are totally different.
Some of us have the battle scars of terrible shell-script installers gone wrong. At best, a poorly-written shell script can deploy cruft to strange places on your disk or interfere with data and software managed by the OS package manager. At worst, such scripts have been known to destroy data and render the whole system a brick because the author was not sufficiently familiar with all of the systems that it might be run on. I don't remember the source now, but one such installer effectively ran `rm -rf /` on the user's machine when a certain variable ended up undefined due to a bug elsewhere in the script.
However, when I download a binary executable from a source that I trust, there's a certain expectation that the program will keep to itself and not interfere with the rest of my system or home directory beyond its own data and config files. Because _not_ doing so departs from convention and will result in a lot of angry users. Likewise I place a great deal of trust in the folks who package software for my OS because while mistakes can happen, their whole goal is a reliable and consistent system as a whole.
Individual app developers care little about my system, they just want users to install their stuff. Curlpipes _look_ attractive to them ("just paste this into your terminal, type in your root password and you're ready to go!"), but we've seen many times over that cross-platform shell script installers are rarely their core competency.
The rm -rf instance you're talking about I believe is from the Steam Linux client [0]. Which is...ironically part of the Steam binary.
[0]: https://github.com/valvesoftware/steam-for-linux/issues/3671
I'd assume that many people download and then check if the MD5/SHA hash matches the published one (e.g., search the web for the hash and see if the results look good).
Harder to do that in automated deployments that are supposed to always use the newest version available though.
When you try to CRAM everything (mail, webserver, GitLab, POP3, IMAP, torrent, ownCloud, Munin, ...) into a single machine on Debian, you ultimately end up activating the unstable repository to get the latest versions of packages, and end up with conflicting versions between pieces of software, to the point that running `apt-get update && apt-get upgrade` becomes your nemesis.
Has anyone here taken a look at Bedrock Linux? It lets you have multiple Linux installations coexist and interoperate (different distros mainly, but different copies of Debian are probably possible too)
I've been fascinated by it but never actually given it a try.
From the bedrock linux introduction page:
Given someone already expended the effort to package the specific version of the specific piece of software a given user desires for one distro, the inability to use it with software from another distro seems wasteful.
Bedrock Linux provides a technical means to work around cross-distro compatibility limitations and, in many instances, resolve this limitation.
https://bedrocklinux.org/introduction.html
Bedrock Linux does indeed support workflows with multiple copies of Debian, and this does in practice often resolve the quoted concern of mixing parts from different Debian releases without having them step on each other problematically. Another use for having multiple Debian subsystems (what Bedrock calls "strata") is to serve as a low risk alternative to an in-place dist-upgrade: If anything doesn't work as desired in the new Debian release, just keep using the previous one until you get it working; once everything's migrated over, you've effectively dist-upgraded and can (if you wish) remove the old stratum.
I'm Bedrock's founder and lead developer. If you have any questions about it I'm happy to answer them.
For my personal home server, I bought a Synology NAS which provides me with file sharing (via Samba), a built in VPN server, automatic RAID, easy dynamic DNS, Plex server, and support for Docker.
Who needs Kubernetes (however stripped down) when you're running a single-machine "cluster"? I write all of the services I want running as Docker containers and use a single docker-compose.yml to bring them up/down as needed. Specify `restart: unless-stopped` for each container and bam, it's self-healing too.
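Roughly, the kind of compose file I mean looks like this (the service names, images, ports and paths are just illustrative, not my actual stack):

```yaml
# docker-compose.yml -- a minimal sketch; names, images and ports are placeholders
version: "3.8"
services:
  nextcloud:
    image: nextcloud:stable
    restart: unless-stopped        # Docker brings it back after a crash or reboot
    ports:
      - "8080:80"
    volumes:
      - nextcloud-data:/var/www/html
  plex:
    image: plexinc/pms-docker
    restart: unless-stopped
    network_mode: host             # Plex discovery works best on the host network
    volumes:
      - plex-config:/config
      - /mnt/media:/data:ro
volumes:
  nextcloud-data:
  plex-config:
```

`docker-compose up -d` brings everything up; `docker-compose down` tears it down.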
I understand the desire to tinker and learn. And still do a lot of it. But for my household infrastructure, I want it plug & play. A couple of Synology NAS, one for storage, one for backup & failover, a hardware firewall/reverse proxy/pihole container/wireguard container on a separate, hardened box. I sleep well at night and I don't have to explain to the wife why email/calendar/file sync/backup/notes/etc/etc isn't working today.
I won't say it is for everyone, and if the OP is happy with his setup, so be it, but I wouldn't say it is a path that most should wander down if they place any kind of value on their personal time.
A refreshing and straightforward, opinionated writeup.
Cuts through the noise of the big cloud providers, who are ironically incentivized to keep things pretty complicated.
Traefik isn't that complicated though, definitely worth learning.
> Traefik isn't that complicated though, definitely worth learning.
Their documentation – especially for K8s – sucks ever since they bumped a major version. Lots of links to the old documentation that no longer work. Sometimes there's no documentation at all (middleware configuration on K8s, for instance).
Caddy inexplicably removed their v1 docs too. They provide a zipfile with the old docs site that you can download and browse locally, but I have no idea why they didn't keep it browsable online.
I asked Matt Holt but got no response, so I deployed the docs myself:
https://caddy-docs.netlify.app/v1/
I use Debian stable and systemd-nspawn, gives me the "virtual machine" experience (separate filesystem, init/systemd, network address, etc) via lightweight containers that are really easy to start, stop, and share files between. All managed by ansible. Once a month I bump versions, run ansible, and forget about it.
I really wanted to like systemd-nspawn, but ran into massive issues with poor to non-existent documentation, bugs (in particular with DNS between 'containers'), and usability issues.
Also, the inability to reasonably run non-systemd distros such as Alpine further killed my interest in it. Even distros like Ubuntu, which use systemd, had to be modified to use the systemd network stack in order to function properly.
Feel free to message me over email if you want some help (in my profile). I can share more details on my setup.
_> Warning With everything installed, just having the Kubernetes server components running add a 10% CPU on my Intel(R) Atom(TM) CPU C2338 @ 1.74GHz_
Why is it that Kubernetes uses this much overhead? Surely it should be sleeping most of the time and occasionally reacting to events?
Is the open source Go runtime less efficient than Google's internal one, perhaps? (https://www.theregister.com/2020/08/10/google_scheduling_cod...)
Probably the old atom cpu. I got ~5% or less during idle on my vps server with k3s, and that's with various stuff running (nextcloud, postgres, postgis, mysql, pgadmin, phpmyadmin, wordpress, redis, memcache, several django apps + celery broker, etc).
On the other hand, running Kubernetes on a Mac immediately hogs ~20% CPU, killing any hope for normal battery life.
I'm using my old home PC as a server and K3s barely adds any CPU load. I assume the 10% figure comes from the 7-year-old Atom processor.
The memory footprint of K3s is noticeable on my 16 GB of RAM though.
When I looked at the shutdown procedure for k3s I ran away in horror: it's basically firing off kill signals left and right and leaves all kinds of stuff lying around, like bridges and temp state, making it hard to restart if you don't want to reboot the server. The net result was that the shutdown procedure for the server did not complete. As far as I can tell, this is somewhat inherent to Kubernetes: it wants to be highly available and doesn't like shutting down the last node in the cluster, which is exactly what a home server is.
Edit to add:
I'm having great fun with Podman instead. I don't need all the orchestration stuff or the autoscaling: one of each service is fine for a home server. Systemd can be the supervisor. I did a little fiddling to get service discovery with dynamic DNS; this could be improved. Using static routes on the router, I can route right into the right container network. This too could be improved: with IPv6 prefix delegation I could potentially route into the container from anywhere in the world.
I'll never understand people who run kubernetes, or even docker for that matter, on their personal servers
I don't even understand 99% of the people who do it for their business workloads, but who am I to judge.
Job security?
I never understand why people quickly judge/look down upon people who just want to try new tech, or even use something because they want to.
Docker compose is too dang easy to use. It took me like an hour to get all the services like I wanted them, and it’s reasonably portable if I ever need to move the services to a different box. And I’ve had stuff break on apt upgrade before, but I haven’t had trouble with updated containers (if I build them or get them from hub).
K8s at home would make me wake up in a cold sweat at night to make sure I didn’t break something though. That’s way too much.
For fun?
The only answer I would accept! :D
I do use Docker on my personal server so that I don't have to struggle with getting compatible dependencies between my local dev machine and the server.
Additionally, the Docker daemon can be configured to auto-restart containers if/when they crash. I use a single docker-compose.yml to bring up the containers I want and to not need to configure each application individually.
I use Docker + Traefik and it's just heaven. No nginx configurations, no cert generation, no renewing or configuring an auto-renewer. Everything just works out of the box.
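For reference, a hedged sketch of that pattern: Traefik watches the Docker socket and issues Let's Encrypt certs per router. The domain, email, and the `whoami` demo service below are placeholders, not the parent comment's actual setup.

```yaml
# docker-compose.yml -- Traefik v2 with the TLS challenge; all names are illustrative
version: "3.8"
services:
  traefik:
    image: traefik:v2.10
    restart: unless-stopped
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=you@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.tlschallenge=true
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro   # Traefik discovers containers here
      - ./letsencrypt:/letsencrypt                     # cert storage survives restarts
  whoami:
    image: traefik/whoami
    restart: unless-stopped
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
      - traefik.http.routers.whoami.entrypoints=websecure
      - traefik.http.routers.whoami.tls.certresolver=le
```

Adding another app is just another service block with its own `Host()` label; Traefik picks it up and fetches a cert automatically.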
The amount of work I have to do on k3s vs the amount of work I have to do on a "classic" setup... well, k3s is greatly simpler. And I can throw out some problematic distro idiosyncrasies.
People assume k3s or managed Kubernetes (like GKE) is too complex or requires too much maintenance. I have a bunch of applications running on a Raspberry Pi and another Ubuntu machine at home as a classic setup, and I run GKE in production for our company. I can say that I spend more time on the classic setup, and things that are simple on Kubernetes, like zero-downtime deployment or automatic certs with Let's Encrypt, are much harder on a classic setup.
The overhead of Kubernetes's runtime (Go) is nonzero. It's meant to run on Xeons, which is going too far for a home server. You have the slow Go GC, then you add on top of it a bunch of PHP and Ruby apps like Nextcloud, sprinkle in a bit of unnecessary virtualization, and you've got a really slow machine that burns through more money on your electric bill than an equivalent EC2 would cost.
Since everything I use at home is Linux, they're all "servers" already. They're all on a WireGuard network and share NFS mounts that I can access anywhere. No PHP, no Let's Encrypt certificates, no Docker, no Kubernetes, no wasting electricity, no fiddling, no "git committing every edit". My setup works just as well in a house with a static IP from an ISP as it could in a van with an antenna receiving 4G data.
I have a very similar setup... but my k3s cluster is the graveyard for old devices. I love that my old devices are still in service, with services reasonably balanced between them. I love that each service I run has a simple configuration checked into a git repo. I love that when I want to run special workloads, I can add cloud-based nodes.
But if i was just running a single server I would have stuck with docker compose!
I am curious, what are the benefits of running kubernetes on your personal servers? Seems like introducing unnecessary complexity.
You pay upfront for some more components. In exchange, you get a lot of benefits.
'classic' servers are only simpler if you can postpone dealing with their complexity.
Classic servers are _always_ simpler. Unless you need to manage a whole lot of them.
It's like we suddenly have forgotten how to run several services on a single machine. ;)
If you're running on a single-node machine and you don't __NEED__ (or can't have it anyway since you're on a single machine/network/power "cluster") high availability, why bother with K8s over a much simpler vanilla Docker daemon that auto-restarts containers when they crash?
Kubernetes manages your containers and keeps them up at the scale you want.
If you need to scale, you can. If your VPS goes down, you don't care, you just get a new one and plug it in. Your cluster was never down.
If you want to move your cluster, you install k8s somewhere else and push the yaml.
That is wishful thinking at best. If your software doesn't know what to do with the additional compute-power, nothing is going to happen.
Your postgresql DB will not automagically scale. Your mail server won't. Neither will your static web server, nor your backup software or image server, unless they were specifically made for that.
And your bottleneck will still be the disk loading the actual data you want to serve.
This should be considered an interesting case study but not a guide on how to actually build your own home servers.
And "install k8s somewhere else and push the yaml" doesn't move your data.
This wasn't a guide on how to build home servers. It was simply a post on how this person uses a tool to manage their home server.
Do you have to be so negative about everything?
I'm curious about how much memory and CPU K3S uses on the manager and nodes in practice. I'm currently running Swarm at home because the overhead is minimal on managers and nodes when it comes to CPU and memory usage.
Great write up! I run a similar setup and documented the high level architecture here:
https://bdw.to/personal-infrastructure.html
k8s is all about automating resource allocation/topology among other things. this is a focus on what's outside of the scope of the processes that are part of the system.
imo, personal servers rarely require this.
Just buy a Synology NAS or two and you have everything in a consistent system.
I love the k3s control-plane in the cloud and the raspberry pi worker nodes running from home (connected over VPN). Not a bad use for all of those raspberry pis we developers seem to accumulate over the years!
Truenas plus HA microk8s inside KVM VMs running in Mesos for myself.
Adding home devices (RPi) to the remote cluster as k8s nodes through the use of WireGuard seems strange to me, as the latency between nodes is supposed to be small within a k8s cluster.
Do you think it will work? Even after adding more of them?
It would be a nice feature.
Very cool. I use GKE with nginx as my ingress controller. The Google LB ingresses are too expensive for this sort of thing.
Also appreciate the cert manager advice. Thank you!
k3s looks like it removes some of the "moving parts" from Kubernetes, but for a single node setup, docker-compose might be simpler to manage.
I've recently moved my "personal infrastructure" from a docker-compose setup to a k3s setup, and ultimately I think k3s is better for most cases here.
FWIW, my docker-compose setup used https://github.com/nginx-proxy/nginx-proxy and its letsencrypt companion image, which "automagically" handles adding new apps, new domains, and all SSL cert renewals, which is awesome. It was also relatively easy to start up a brand new fresh machine and re-deploy everything with a few commands.
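For anyone curious, that pattern looks roughly like this; image tags, the example app, domains, email and the exact volume layout are placeholders and may differ from the real setup described above:

```yaml
# Sketch of nginx-proxy + its ACME companion; all names are illustrative
version: "3.8"
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy
    container_name: nginx-proxy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
  acme-companion:
    image: nginxproxy/acme-companion
    restart: unless-stopped
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
      - DEFAULT_EMAIL=you@example.com
    volumes:
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - acme:/etc/acme.sh
      - /var/run/docker.sock:/var/run/docker.sock:ro
  someapp:
    image: ghcr.io/example/someapp       # any web app container
    restart: unless-stopped
    environment:
      - VIRTUAL_HOST=app.example.com     # nginx-proxy routes this host to the container
      - LETSENCRYPT_HOST=app.example.com # companion requests and renews the cert
volumes:
  certs:
  vhost:
  html:
  acme:
```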
I started down the route of using kubeadm, but then quickly switched to k3s and never looked back. It's now trivial to add more horsepower to my infrastructure without having to re-create _everything_ (spin up a new EC2 machine, run one command to install k3s & attach it to the cluster as a worker node). There's also some redundancy there: if any of my tiny EC2 boxes crashes, the apps will be moved to healthy boxes automatically. I'm also planning on digging out a few old Raspberry Pis to attach as nodes from home (over a VPN) just for funsies.
Ultimately k8s certainly has a well earned reputation for having a steep learning curve, but once you get past that curve, managing a personal cluster using k3s is pretty trivial.
I found k3s to be VERY noisy in logs - I definitely recommend log2ram if you want your SD card to last very long! (Or use different external storage). I had two Pi nodes with corrupted filesystems until I made the switch.
https://mcuoneclipse.com/2019/04/01/log2ram-extending-sd-car...
Awesome protip, thanks! I normally keep a rolling log history backed up to S3 but I'm thinking for these Pi nodes there's probably going to be literally nothing of consequence running on them, so this looks like an ideal solution!
I’ve got 5 nodes (one remote VPS, 3 Raspberry Pi’s, and a NAS at home) and I still just use Docker Compose. Compose is just way less verbose and far simpler.
The advantages of Kubernetes seem to be adding an overlay network, config and secrets management. None seem particularly useful for me though. Generally, I care which host a service I’m running is on, so tacking affinities to everything is additional work and doesn’t help with scalability. Since services are already pinned to a host, configuration is easy to manage by deploying it to the server. And, since most traffic is on the local network, an overlay is not really useful. I use a set of containers that form an SSH reverse proxy to expose some home services to remote services without exposing them to the public.
All this is orchestrated with Ansible. My configs are still not published online, but I have an older blog post about it.
https://blog.iamthefij.com/2018/01/21/docker-orchestration-f...
Docker-compose isn't going to help you with Let's Encrypt; you're going to need to keep re-solving that problem with each app you have, or find some other way to tackle it, because you've picked a way to deploy containers and don't have any kind of centralized cloud system at your back.
In my comments, I mention that the author could have used Kilo, which would have been a Kubernetes-native way to manage their WireGuard system, and to connect the Pi & their other systems to their existing K3S system.
I agree that docker-compose might be simpler, but there's a very, very limited realm of concerns that it will ever serve, whereas Kubernetes's / the Cloud Native ambition is to manage everything you would need in your cloud. Whatever you need should, ideally, be manageable within the same framework.
DNS is another decent example, where Kubernetes will help you manage domain names, somewhat. There's still work to be done there, but there are some good starts. There are so many operators, all of which purport to let you manage these services in a "cloud native" way. We're still learning, getting better at it, but being able to manage all these things semi-consistently, via the same tools, is a superpower.
https://github.com/operator-framework/awesome-operators
There's also the question of short-term wins vs long-term use. You will not use docker-compose at your job. More and more people are going to be using Kubernetes to manage a wider and wider variety of systems & services, bringing more and more capabilities under Kubernetes management.
If you run a single server, getting a Let's Encrypt cert is as easy as running a cron script. Then you just run a single instance of Nginx with the cert directory mounted as a volume. You will have to do a little extra work to maintain the nginx configuration to point to all your other containers, but it's generally just copy/pasting a block and changing the port to add a new service.
Kubernetes is cool but the only reason to run it on a single instance is because you want to.
Also docker-compose isn't used to host stuff in production very often but it is used to manage running local instances quite a bit. I wouldn't write it off as not worth learning.
Comments in this discussion are filled with similar sort of 'it's easy, just do it by hand' sort of advice.
Let's Encrypt is an example of one thing that you can either get practically for free in Kubernetes, or which you can keep fiddling & expanding your nginx configuration with, as you add domains & subdomains. Keep hoping your cron jobs are running as they need to.
But it's one example. And it's an easy example.
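To make the "practically for free" claim concrete: with cert-manager (which, as I understand it, is what the original write-up sets up), a ClusterIssuer plus one annotation on an Ingress is roughly all it takes. This is a hedged sketch; the issuer name, domain, email, and service names are placeholders.

```yaml
# cert-manager sketch: issue and auto-renew a Let's Encrypt cert for an Ingress
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # cert-manager watches this
spec:
  tls:
    - hosts: [myapp.example.com]
      secretName: myapp-tls        # created and renewed by cert-manager
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```

Every new subdomain is just another Ingress with the same annotation; no cron jobs to keep an eye on.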
If I want a postgres database, it's pretty easy to set up postgres too. I can do it in under 30 minutes. But I should also set up read replicas. I should set up backups. Somehow I have to leave enough documentation so that the next time I need to operate on these systems, or if someone else does, they can follow my work & understand what's happening. I should probably set up monitoring on all these systems, so that if something goes wrong with a replica & lag rises, or if backups don't write, I get notified. I should add metrics so I can see problems as they are developing, increasing my ability to sleep soundly. There's a fractal web of concerns around running & operating software, and the classical path is to become ever more an expert on the niche. To keep diving in, trying to find your own way forward, pick your own stack of tools to help, and document document document, test test test everything. Run some chaos engineering tests to make sure these systems you've strung together won't lose all the data if something some day does go wrong.
Kubernetes should be the default model for anyone who cares about themselves and/or running software effectively. An off-the-shelf postgres operator will let me define a Postgres resource, with as many replicas as I please, and backup schedules as I like. One can install k3s on one node or a dozen nodes, install the helm chart for a postgres-operator, create a manifest for a postgres database, and apply it, in under an hour. Whether you want a tiny one-node postgres or a many-replica geo-distributed postgres system, this is a better way to do it, one that will yield more predictable results, whose pieces more engineers will be able to recognize & identify & work with, than what would take me a week of time to set up by hand. Because it flips the model: rather than bottom-up engineering the pieces of the system, Kubernetes empowers me to define a top-down set of things I desire, and it lets the automation work to fulfill those asks. It takes the grunt work out and paves the cowpaths.
And Kubernetes does it consistently. After getting a backed up, monitored, replica'ed postgres going in an hour, I can use the same skillset, the same patterns, to deploy Redis. And the next service after that.
Even if you are single node, perhaps you might want to use SQS, or S3, or soon Lambdas; maybe as a part of some JAM Stack. Well, with Kubernetes, you can manage that under the same roof too, whether you run Amazon's EKS, or your own Kubernetes at home, or any other Kubernetes. You can manage everything, consistently.
https://aws.amazon.com/blogs/containers/aws-controllers-for-...
It's a different operational paradigm, and I think almost all operators would have a better experience learning & benefiting from a top-down control system like Kubernetes. Bottom-up engineering of deployed services seems appealing to a large number of today's seasoned engineers -- it seems easier to them -- but they don't get how powerful it is to have automation at your back, they don't understand how much clearer operations are when all of your state is centralized & workable with consistent tooling, and they haven't seen how much nicer it is when everything you have can be managed under one roof. They all think it's easy, learning this service, learning that service, that the services are all simple. But lacking this top-down system of control, lacking something consolidated & centralized to manage everything, lacking the live agents/controllers that tirelessly work to keep things running, is a huge loss. I hope future engineers can enjoy a much better constructed, much less piecemeal environment.
Traefik reverse proxy can be hosted using docker compose, which will deal with fronting each container and offloading ssl.
You can communicate between containers using hostnames. I personally have a container that keeps DDNS updated for my home setup. I'm not sure what else you mean by DNS.
Does anyone have any articles highlighting the benefit of K3S? I am struggling to understand its pros and cons.
we run k3s in production. it's a much, much nicer experience than even EKS.
it integrates properly with spot instances, etc. I would pay money for k3s at this point.
On my home server I use OpenBSD as an orchestrator.
It has cool features like processes and directories to isolate different things, it's pretty neat.
I consider this a form of procrastination, and I've been guilty of this myself. There are probably some unpleasant high-value tasks this person needs to be doing instead, so they convince themselves that this is high-value and do this because it's pleasant to them.
You can actually weaponize this to do things that are "less unpleasant" but still worthwhile. Such tasks are much more palatable in the presence of another, more unpleasant task that you'd have to do instead.
Story time: at Google the most unpleasant task employees have to do is writing their own (and their reports', if they have any) performance review, which results in a phenomenon called "perfcrastination", where a shit ton of work gets done just to postpone writing Perf until the last possible moment.
> Story time: at Google the most unpleasant task employees have to do is writing their own (and their reports', if they have any) performance review, which results in a phenomenon called "perfcrastination", where a shit ton of work gets done just to postpone writing Perf until the last possible moment.
Oh, Google does 'self evaluations' too? That's reason enough to never even apply. That ranks among one of the worst ideas HR has ever had.
Microsoft did that too. Now I'm at another company and we do it here as well. I'm not a fan of it, but having someone else write up your review sounds like a wonderful dream.
The way I see it, you write your review, it goes up the management chain, they get to pick and choose what they agree with and now they have evidence since you admitted to doing it (or not, if it's missing in your self-eval), and then come back and justify why you get the bonus/raise or not using your own statements. Anyway, off-topic for this post.
Worse, you have to "apply for promotion" if you want to get promoted, and the decision is made by a remote committee of people who don't even know anything about what you did first hand - you have to beat your chest like an ape, and get endorsements from people a level or two up from where you are (which can be a big problem outside Mountain View, since there are fewer such people there). The whole process is utterly demeaning and demoralizing, especially if you're from cultures where beating one's chest is frowned upon (such as e.g. Russian).
You routinely see people do ridiculously awesome work and then get turned down for promo. You also routinely see people promoted who just sat in the meetings with the right people and took the credit for "launching" stuff. Major part of the reason why I left.
I recently rebuilt my home server from scratch, and after a bunch of research decided to run Proxmox [1] on it.
I considered Kubernetes but two things really held me back: I've never installed/managed Kubernetes itself (and didn't really want to learn with my "production" home network), and I wanted to be able to run things other than Docker containers such as a Windows VM. I've used stuff like VMware and VirtualBox before and Proxmox is closer to those, but without the overhead of everything being a full VM.
My old system was a "The Server" style setup as described here, running everything on a single Ubuntu bare metal machine (acting as a NAS, running Plex, Unifi, HomeAssistant, and a dozen other applications). It was getting old, had gone through some major OS upgrades and was about due for another. I used to do more "experimental" work on other old hardware I had laying around, but there's so much overhead involved that it often killed my motivation to learn something new.
It's only been a couple months but I'm quite happy with Proxmox so far. I run most stuff in LXC containers with Alpine, though have at least one Ubuntu (Unifi Controller) and have played with some VMs (qemu): Hass-io [2] (which is designed for the Pi) and Windows 10 (just to try it). I actually made my Windows 10 image into a template, so I can quickly stand up an instance to try something out on Windows without hassle of reinstalling or worrying about breaking an actual PC.
The LXC containers are very lightweight, and what I love is I can upgrade, experiment or swap out a service knowing I have good isolation from everything else -- just restore backup or revert to a snapshot. If I need to run the latest Debian Unstable to try some particular application out, I can do that without worry or having to setup another machine.
Even porting everything over was very smooth, since I didn't have to worry about interaction between things or worry too much about permissions -- just install where it wants to go, whatever. (I used to try to keep my server setup in a consistent way, and it was often painful when some app had a different way of running or configuring itself or expected a different filesystem or user+group layout than I wanted). Now I just make symlinks in /root of each container and/or use Proxmox's "notes" feature to remind my future self where config files or other important things are located, and that's worked fine so far.
What I can't attest to is what's involved in running it long-term: for example I haven't had to upgrade Proxmox itself, yet.
On the plus side, it's removed an excuse preventing me from learning new things, and I donated/recycled a big pile of old hardware I had sitting in the basement ("just in case" I wanted to experiment on something).
[1]
[2] https://www.home-assistant.io/hassio/
This is such a great write-up. I hope we continue to evolve the modern ops setup on the metal, and make it easy for folks to onboard into something that scales both big & high and small & low. This is, imo, enormously good tech to learn, and I feel like many people are wasting their time learning "things that work for them" or that "aren't as complex", when those personal choices & investments will, in all likelihood, not pay off elsewhere and will not be things other people are likely to know, use, or enjoy. And k3s is so easy to use, and works so well, that I think many folks kind of cheat themselves out of a better experience when they pick something a little more legacy, like docker-compose.
Also notable: this is just a first step, & we can get better & better at creating a wider system of services for the personal server from Kubernetes. For example I use the Zalando postgres-operator, which lets me just ask for/apply a Kubernetes object, and presto-chango, I have a postgres database with as many replicated instances as I want. The author here similarly enjoys having Let's Encrypt ambiently available. Managing more and more systems within Kubernetes will continue to scale the operational network effect of choosing tech like Kubernetes -- tech that doesn't just run containers, but is an overarching cloud. Kubernetes is desired state management, a repository for all of your (small) cloud's state.
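As a hedged illustration of what "just apply a Kubernetes object" looks like with the Zalando postgres-operator -- the cluster name, team id, sizes, users and database below are placeholders, not my actual manifest:

```yaml
# Zalando postgres-operator resource sketch; the operator provisions and manages the cluster
apiVersion: acid.zalan.do/v1
kind: postgresql
metadata:
  name: acid-home-db          # convention: the name starts with the teamId
spec:
  teamId: acid
  numberOfInstances: 2        # one primary plus one streaming replica
  volume:
    size: 5Gi
  users:
    app_user:                 # role the operator creates, with these privileges
      - createdb
  databases:
    appdb: app_user           # database name -> owning role
  postgresql:
    version: "14"
```

`kubectl apply -f` that manifest and the operator stands up the instances, the replication, and the credentials for you.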
I'd consider maybe replacing some of the hand-made Wireguard work done here with Kilo[1], which can either run as a Container Network Interface (CNI) plugin or can wrap your existing Kubernetes networking CNI provider (by default K3s uses Flannel). This will automate the process nicely, let you manage things like peers in Kubernetes easily. When the author connected the RPi to their existing cluster, that's exactly the sort of multi-cloud topology that Kilo is there to help you run & manage, and from inside of Kubernetes itself! Kilo rocks.
Also worth noting that some of the latter half of this write-up is optional. Switching K3s's own ingress out for nginx is the author's preference, for example. You may or may not need a mail server. The write-up is pretty long; I think it's worth highlighting that the core of what's happening here, what others would need to do, is pretty short.
I do enjoy that the author started the steps by running gpg & sops, to make keys to secure this all. This is pretty rigorous. It's good to see! I don't think all operators have to do this, but it showed that the author was taking it fairly seriously.
For reference, I run a 3-node K3s cluster at home, a separate single K3s instance, and am planning on trying to convert my laptops & workstations over so that operationally I get the same kind of great observability & metrics & manageability on them that I enjoy on the cluster. I'd like to cloud-nativize more of my day-to-day computing experience, for consistency's sake, & because I think uplifting many of the local things on my machine into pieces of a larger cloud body of state will give me more flexibility & capabilities that I can enjoy playing with. I look forward to becoming less machine-centric and more cross-machine cloud-centric.
[1]
> planning on trying to convert my laptops & workstations over
Eh, ok? So set up a mesh vpn, like zerotier - when you close your notebook slack migrates to your workstation?
(I know, you highlighted monitoring - but nothing stops you from running statsd or something on your laptop).
Kilo is easier to set up and integrates with the pattern of how I want to set up & run all my systems. I get a WireGuard mesh "for free" because I run Kubernetes, thanks to Kilo. Your alternatives have no appeal: they all involve manual work that I'd have to set up myself, and I am moving towards a better managed, better automated form of existence. Setting up little one-off systems is an old, dark, terrible world.
Same thing with monitoring. Yes, I can and do use ansible to go set up systemd daemons on my workstations, to run local prometheus & node-exporters. Today. But this is one of many things I have to go re-run every time I bring up a new node, new laptop. And if I change configuration, update my ansible scripts, then I have to go update all my nodes, go find each laptop, turn it on, re-run the script on it.
If I want to add another admin to help me operate these systems, a sibling, a lover, they would have to go dig through my scripts to understand what I've done and how I've set things up in my environment.
Kubernetes makes all of this not bad. It provides centralized, top-down control of all my systems. If I had Kubernetes on them, I'd add a node-exporter DaemonSet, and it'd be running, in a consistent fashion, on every node. If I need to change the configuration later, I change the DaemonSet, and it changes on all my laptops whenever they boot up. Because it's Kubernetes, there's a practice, a standard way, that other operators can see & understand & expect for how I have these concerns managed; it's easy for them to see what DaemonSets are running, easy to see how they're configured, and it'll all look bog-standard to any operator.
Nothing stops me from running statsd or whatever. But doing so is a pain in the ass to manage and maintain, and I can not bloody wait to be free of such rubbish unmanaged DIY computer-hugging. I want a better way of operating my pool of resources.
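For what it's worth, the node-exporter DaemonSet mentioned above is a small manifest. A hedged sketch (namespace, labels and image tag are illustrative):

```yaml
# Run one node-exporter pod on every node in the cluster
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      hostNetwork: true            # expose metrics on each node's own IP, port 9100
      hostPID: true
      containers:
        - name: node-exporter
          image: prom/node-exporter:v1.6.1
          args:
            - --path.rootfs=/host  # read node metrics from the mounted host filesystem
          ports:
            - containerPort: 9100
          volumeMounts:
            - name: root
              mountPath: /host
              readOnly: true
      volumes:
        - name: root
          hostPath:
            path: /
```

Join a new laptop to the cluster and it picks this up automatically; change the manifest once and every node follows.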
I'm still unconvinced "adding kubernetes" is likely to make simple setup simpler - but to each their own.
But regarding kilo - doesn't sound like it makes much sense for laptop/road warrior?
https://github.com/squat/kilo#step-4-ensure-nodes-have-publi...
> At least one node in each location must have an IP address that is routable from the other locations. If the locations are in different clouds or private networks, then this must be a public IP address.
Makes it difficult to use your laptop from a coffee shop?
I'm running into this problem now, trying to set up my laptop as a k3s node. I'm still exploring my options, but I'll probably have to wire up WireGuard on the road-warrior nodes by hand. I hope/think that might be sufficient, but I haven't tried it yet.
There is a great VPN mode[1] in Kilo, if I just wanted to connect my laptops or workstations to the k3s cluster.
[1]
Might be worth looking into zerotier if what you need is transparent mesh vpn with fairly smart routing (local lan traffic stays local). Never tried to combine it with k8s, so ymmv.