# Ask HN: Have you left Kubernetes?
https://news.ycombinator.com/item?id=32304834
Quotes from users:
* "allocate single-node node-pools with taints that map 1:1 to each stateful component", i.e. basically dedicate a VM to a pod: no need to manage every individual node, but you still get the advantage of VM-level resource isolation (see the sketch after this list)
* Redis is hard in k8s when it's configured to load from disk and the container's memory is tightly bound: Redis apparently uses ~400% of its steady-state memory while reading the AOF tail of an RDB file, which gets the container stuck in an OOM-kill loop until you come along and temporarily loosen its memory limit
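A minimal sketch covering both points above, assuming a hypothetical single-node pool dedicated to Redis (the node name, taint, pool label, and memory sizes are illustrative, not from the thread):

```
# Taint the dedicated node once, e.g.:
#   kubectl taint nodes redis-node-0 dedicated=redis:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  tolerations:
  - key: dedicated          # only pods tolerating the taint can land here
    operator: Equal
    value: redis
    effect: NoSchedule
  nodeSelector:
    cloud.google.com/gke-nodepool: redis-pool   # pool label; provider-specific
  containers:
  - name: redis
    image: redis:7
    resources:
      requests:
        memory: 4Gi         # steady-state working set
      limits:
        memory: 16Gi        # ~4x headroom for the AOF/RDB replay spike
```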
## Tip: the MALLOC_ARENA_MAX parameter (glibc) defaults to 8 * nproc on 64-bit
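Lowering it trades a little allocator throughput for less fragmentation in memory-bound containers; a sketch (the container name and image are hypothetical, and the value 2 is a widely used choice, not from the thread):

```
containers:
- name: app
  image: my-app:latest      # hypothetical image
  env:
  - name: MALLOC_ARENA_MAX  # glibc default: 8 arenas per core on 64-bit
    value: "2"              # fewer arenas -> less fragmentation, lower RSS
```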
## Difference between single-node node pools vs. pod constraints like anti-affinity?
### It would make node-pool operations like version upgrades more predictable, since you'd know for sure which apps are running on a given node pool. It can also make monitoring resource usage a little easier, since you can just monitor at the node level.
```
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <hostname>
```
This ensures that your workload is only scheduled (and rescheduled) onto the matching node.
Operators shouldn't be viewed as replacements for stateful-operations knowledge, but that's probably what they'll be used for.
Amazon EKS = Amazon's managed k8s (Elastic Kubernetes Service),
but there is also a k8s operator, ECK (Elastic Cloud on Kubernetes), that manages Elasticsearch in k8s.
Despite running this operator, the customer still used pod anti-affinities and taints to give each Elasticsearch node a full k8s node to itself and avoid resource contention (see the sketch below).
An added benefit is that Elasticsearch handles node restarts and upgrades pretty gracefully (no surprise, since an Elasticsearch cluster is redundant by design).
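A sketch of that layout, assuming the pods carry a hypothetical app: elasticsearch label: a required pod anti-affinity on the hostname topology key keeps any two Elasticsearch pods off the same k8s node.

```
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: elasticsearch              # hypothetical pod label
      topologyKey: kubernetes.io/hostname # i.e. at most one such pod per node
```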
https://cloud.google.com/compute/docs/instance-groups#managed_instance_groups
## Avoid "smelly" k8s setups (learn how the system works, then restart from scratch)
## Don't let your devs learn about k8s on the job.
## Let them run side-projects on your internal cluster.
## Give them a small allowance to run their stuff on your network and learn how to do that safely.
## Give your devs time to code review each other's internally-hosted side-projects-that-use-k8s.
## Reap the benefits of a team that has learned the ins and outs of k8s without messing up your products.