💾 Archived View for koyu.space › vydyck › tech › containers › k8s.gmi captured on 2023-07-10 at 14:16:12. Gemini links have been rewritten to link to archived content


Ask HN: Have you left Kubernetes?

https://news.ycombinator.com/item?id=32304834

quotes from users:

don't put the entire stack on k8s; only put stateless components there. Keep stateful components (= components/pods/containers/applications that store data) on dedicated VMs

if you need stateful components in k8s you can:

* "allocate single-node node-pools with taints that map 1:1 to each stateful component" - so basically dedicate a VM to a pod: no more need to manage every individual node, but you still have the advantage of VM resource isolation
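That pattern could look roughly like this sketch (the taint key `dedicated`, the value `redis`, and the node-pool label are made-up examples, not from the thread):

```
# taint the single node in the dedicated pool:
#   kubectl taint nodes <node-name> dedicated=redis:NoSchedule
# then let only the matching pod tolerate it:
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  tolerations:
  - key: dedicated
    operator: Equal
    value: redis
    effect: NoSchedule
  nodeSelector:
    cloud.google.com/gke-nodepool: redis-pool   # example node-pool label (GKE-style)
  containers:
  - name: redis
    image: redis:7
```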

* redis is hard in k8s when it's configured to load from disk and the container's memory is "tightly bound": Redis apparently uses ~400% of its steady-state memory while reading the AOF tail of an RDB file, getting the container stuck in an OOM-kill loop until you come along and temporarily loosen its memory limit

## tip: the MALLOC_ARENA_MAX parameter: glibc's default is 8 * nproc
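Lowering it is a one-line env var in the container spec; a sketch (the image name and the value 2 are illustrative - 2 is just a common choice for memory-constrained containers):

```
containers:
- name: app
  image: myapp:latest        # placeholder image
  env:
  - name: MALLOC_ARENA_MAX
    value: "2"               # fewer glibc malloc arenas = lower RSS under memory limits
```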

## difference between single-node pools vs pod constraints like anti-affinity?

### It would make node pool operations like version upgrades more predictable, since you'd know for sure which apps are running on a given node pool. It can also make monitoring resource usage a little easier, since you can just monitor at the node level
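For comparison, the pod-constraint alternative would be a podAntiAffinity rule; a minimal sketch (the `app: redis` label is an illustrative example):

```
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: redis                          # example label on the stateful pods
      topologyKey: kubernetes.io/hostname     # at most one such pod per node
```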

set nodeAffinity on PersistentVolumes:

```
nodeAffinity:
  required:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - <hostname>
```

this ensures that your workload will be rescheduled on the matching node
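In context, that stanza sits on a local PersistentVolume roughly like this (name, path, and size are made up for illustration):

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  local:
    path: /mnt/disks/data        # example path on the node's disk
  nodeAffinity:                  # pins the volume (and thus the pod) to one node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <hostname>
```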

don't host databases in k8s

get very comfortable with k8s before moving to "stateful loads"

many db vendors have dedicated operators, which introduce best-practice deployments with less operational fuss when provisioning the instances. Tooling like Strimzi for Kafka is an example here.

=> k8s-operators.gmi
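With Strimzi, for example, a whole Kafka cluster becomes one custom resource; a sketch (replica counts, storage sizes, and the cluster name are illustrative):

```
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
    - name: plain
      port: 9092
      type: internal
      tls: false
    storage:
      type: persistent-claim
      size: 100Gi
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi
```

The operator watches this resource and handles provisioning, rolling restarts, and config changes.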

Operators shouldn't be viewed as replacements for stateful operations knowledge, but that's probably what they'll be used for.

a postgresql operator exists too

Crossplane is a great way to create and manage resources across cloud providers and MSPs via kubernetes objects.

https://crossplane.io

EKS != ECK

Amazon EKS = managed k8s

but there is also a k8s operator called ECK (Elastic Cloud on Kubernetes) to manage Elasticsearch in k8s

despite this operator, this customer still used "pod anti-affinities and taints" to give each Elasticsearch node a full k8s node to avoid resource contention

An added benefit is that Elasticsearch handles node restarts & upgrades pretty gracefully (no surprise, since Elasticsearch is a redundant cluster by itself)
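Combining both constraints in an ECK manifest could look like this sketch (assuming the ECK operator; the taint key `dedicated` and version are illustrative, while `elasticsearch.k8s.elastic.co/cluster-name` is the label ECK puts on its pods):

```
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: es
spec:
  version: 8.13.0
  nodeSets:
  - name: default
    count: 3
    podTemplate:
      spec:
        tolerations:                     # only ES pods may land on the tainted nodes
        - key: dedicated
          operator: Equal
          value: elasticsearch
          effect: NoSchedule
        affinity:
          podAntiAffinity:               # and at most one ES pod per node
            requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  elasticsearch.k8s.elastic.co/cluster-name: es
              topologyKey: kubernetes.io/hostname
```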

GKE has an "aggressive update schedule"? this customer used Cloud SQL for smaller DBs, but ran larger ones on GCE with ProxySQL to reduce downtime.

Managed instance groups "work as advertised" - though that has nothing to do with k8s?

https://cloud.google.com/compute/docs/instance-groups#managed_instance_groups

k8s lowers the maintenance burden (pets vs cattle); however, k8s yaml/charts/... are also high-maintenance

it depends: another customer uses a "managed service" (= pure outsourcing) for the "stateful stuff" to ease the "operational burden"; the time saved is used to maintain the "yaml stuff", which they only need to touch once every 2 months, if that

redis, sql etc. are actually NOT RECOMMENDED on k8s (by the vendor?)

EPHEMERAL redis is well suited to k8s though - Durable redis is not.
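An ephemeral (cache-only) Redis fits the cattle model: no volume, persistence off, and maxmemory set below the pod limit so Redis evicts keys instead of getting OOM-killed. A sketch (names and sizes are illustrative):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-cache
  template:
    metadata:
      labels:
        app: redis-cache
    spec:
      containers:
      - name: redis
        image: redis:7
        # no volume, no RDB snapshots; cap Redis memory below the pod limit
        args: ["--save", "", "--maxmemory", "400mb", "--maxmemory-policy", "allkeys-lru"]
        resources:
          limits:
            memory: 512Mi
```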

## management tips

## avoid "smelly" k8s setups (learn how the system works and then restart from scratch)

## Don't let your devs learn about k8s on the job.

## Let them run side-projects on your internal cluster.

## Give them a small allowance to run their stuff on your network and learn how to do that safely.

## Give your devs time to code review each other's internally-hosted side-projects-that-use-k8s.

## Reap the benefits of a team that has learnt the ins and outs of k8s without messing up your products.