Why move from 2 to 4 GB nodes in k8s?

I upgraded to a higher-performing set of nodes in my k8s cluster. The upgrade was easy, and it freed up roughly 1GB of memory.

Now my applications can use that memory, or I can rely on it for bursting, without needing another node just for memory! Below you’ll find the steps I took to upgrade and some quick math.

Upgrade steps:

  1. Create a new pool for bigger nodes
    1. Members of a pool all have the same specs
  2. Add nodes to the new pool (I used 4GB nodes)
  3. As nodes come online in the new pool, scale down nodes in the old pool (my old pool had 2GB nodes; see the sketch after this list)
  4. Once all nodes in the new pool are online, delete the old pool
  5. k8s takes care of moving your workloads around! I love k8s.
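Depending on your provider, scaling down the old pool will evict pods for you. If you want to nudge workloads off the old nodes first (step 3), here is a minimal sketch using the official Kubernetes Python client to cordon every node in the old pool. The label key (`node-pool`) and pool name (`small-pool`) are assumptions; check your own nodes with `kubectl get nodes --show-labels`.

```python
# Minimal sketch: cordon every node in the old (2GB) pool so new pods
# schedule onto the 4GB pool. Label key and pool name are assumptions.
from kubernetes import client, config

POOL_LABEL = "node-pool"   # assumed label key; your provider's may differ
OLD_POOL = "small-pool"    # placeholder name for the old 2GB pool

def cordon_old_pool() -> None:
    config.load_kube_config()   # uses your local kubeconfig
    v1 = client.CoreV1Api()

    for node in v1.list_node().items:
        labels = node.metadata.labels or {}
        if labels.get(POOL_LABEL) == OLD_POOL:
            # Mark the node unschedulable; existing pods get rescheduled
            # when the old pool is scaled down or deleted.
            v1.patch_node(node.metadata.name, {"spec": {"unschedulable": True}})
            print(f"cordoned {node.metadata.name}")

if __name__ == "__main__":
    cordon_old_pool()
```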

Why did I do this?

Having this cost info readily available at my fingertips is what prompted me to take action: not because I’m paying the bill, but because it’s easy for me (as a developer) to see what my system is costing, and I don’t want it to run inefficiently.

The small nodes cost $10 each and came with 2GB of memory. I had three of them, totaling 6GB of memory. On average, with my application running, they consumed 65% of the available memory, or 3.9GB, leaving roughly 2GB free.

The bigger nodes cost $20 each and come with 4GB of memory. I currently have two of them, totaling 8GB of memory. On average, with my application running, they consume 3.04GB.
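The quick math, using only the figures above:

```python
# Quick check of the memory figures above (all numbers are from the post).
small_pool_total = 3 * 2.0                    # 3 nodes x 2GB = 6GB
small_pool_used = 0.65 * small_pool_total     # 65% utilized ~= 3.9GB

big_pool_total = 2 * 4.0                      # 2 nodes x 4GB = 8GB
big_pool_used = 3.04                          # observed usage in GB

print(f"memory saved: {small_pool_used - big_pool_used:.2f} GB")   # ~0.86 GB
print(f"headroom now: {big_pool_total - big_pool_used:.2f} GB")    # ~4.96 GB
```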

That’s a savings of almost 1GB! In the container world, that’s power. Why? Each node runs system processes in the kube-system namespace, and the total memory those processes consume across three small nodes is greater than what they consume across two bigger nodes.
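If you want to see that overhead for yourself, here’s a rough sketch (again with the Python client) that sums the memory reported for kube-system pods. It assumes metrics-server is installed in the cluster, since that’s what backs `kubectl top`.

```python
# Rough sketch: sum the reported memory usage of kube-system pods.
# Assumes metrics-server is installed (it backs `kubectl top pods`).
from kubernetes import client, config

config.load_kube_config()
pod_metrics = client.CustomObjectsApi().list_namespaced_custom_object(
    group="metrics.k8s.io", version="v1beta1",
    namespace="kube-system", plural="pods",
)

total_ki = 0
for pod in pod_metrics["items"]:
    for container in pod["containers"]:
        mem = container["usage"]["memory"]   # usually reported in Ki, e.g. "23540Ki"
        if mem.endswith("Ki"):
            total_ki += int(mem[:-2])        # other suffixes skipped for brevity

print(f"kube-system memory usage: {total_ki / (1024 * 1024):.2f} GB")
```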
