Resource tuning in K8s

K8s and limits

On K8s, for every pod you can declare how much memory and CPU it needs. To make things “simpler”, K8s defines two sets of values, requests and limits, for both CPU and memory.
After some trouble on GCP, I was forced to dig a bit into the subject.


From the definition:

When you specify the resource request for containers in a Pod, the kube-scheduler uses this information to decide which node to place the Pod on. When you specify a resource limit for a container, the kubelet enforces those limits so that the running container is not allowed to use more of that resource than the limit you set. The kubelet also reserves at least the request amount of that system resource specifically for that container to use

First of all, the suggested fragment, then the explanation:

    resources:
      requests:
        cpu: "0.90"
        memory: "1024Mi"
      limits:
        memory: "1024Mi"


Let’s start with memory, which is the simpler of the two.

Memory limits

First of all, memory is the only resource you cannot multiplex. If you look at cloud pricing, there is always a linear formula connecting price and memory.

If you set memory limits > memory requests, you are betting on overbooking.

If a pod starts consuming more than its request, it competes with the other pods for the node’s spare memory… and if it goes over its limit, or the node runs out of memory, it gets killed (the infamous OOMKilled).

So a conservative approach is to set limits == requests, because then K8s guarantees the pod will have enough memory to run.

CPU limits

CPU, on the other hand, is a ‘compressible’ resource. On AWS, for instance, burstable instances get throttled when they consume too much CPU, and there is the concept of CPU credits.

(Enforcing CPU usage is critical for cloud providers, so K8s CPU limits are generally well enforced :)

Here the best approach is to set only the CPU request: you are guaranteed at least that minimum share of CPU, while the pod is still free to use spare cycles.
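Putting the two rules together, the resources block lives under each container of the pod spec. A minimal sketch of a Deployment (name and image are made up):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app                        # hypothetical name
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: my-registry/my-app:1.0   # hypothetical image
            resources:
              requests:
                cpu: "0.90"        # CPU: request only, no limit
                memory: "1024Mi"   # memory: request == limit
              limits:
                memory: "1024Mi"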

A practical example and some helper commands

To make things a little more complicated, you also need to take care of other factors, such as the number of replicas, the rollout strategy and the number of containers inside your pods.

For instance, if you use the standard rollout strategy, you will likely see a “spike” in the resource needs of your pods, because K8s rolls out the new pods before shutting down the old ones, to offer a better QoS.
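The size of that spike depends on the rollout parameters: maxSurge controls how many extra pods K8s may create above the desired replica count. A sketch of a conservative setting, placed under the Deployment spec:

    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxSurge: 1          # at most one extra pod during the rollout
        maxUnavailable: 0    # never go below the desired replica count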

This command collects the resource settings of your pods, but note the output is not well-formed JSON:

# Only running pods
kubectl get po --field-selector=status.phase=Running -o=jsonpath='{range .items[*]}[{.metadata.name}, {.spec.containers[*].resources}] {end}'

For the explanation look at this thread.


Some of my customers enforce the presence of requests and limits for both CPU and memory: they will not deploy without these values.
So for memory I ended up following the equal-values rule, to avoid getting killed.

For CPU, I put two different values, and I think it is a good thing because it guarantees fairness: capping the CPU your system can take avoids thrashing the node.

Also keep in mind these limits are enforced on cloud providers, but not always respected by a vanilla Docker daemon (it depends on your Linux kernel cgroup support): I had very little luck enforcing them on plain Docker, but minikube seems able to enforce them.

About java containers

Estimating memory for the JVM is a little tricky, because Java needs a bit more memory than what you declare on the command line: besides the heap there are metaspace, thread stacks and so on. For sure the JVM is fatter than a Python interpreter.

Generally speaking, a simple Spring Boot application connecting to a database and exposing some minimal REST services cannot really run in a pod with a 400Mi limit.

And yes, if your microservice needs more than 1024Mi just to start, it is not so micro anymore :)
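One way to keep the JVM inside the pod limit is to size the heap as a fraction of the container memory: recent JVMs (8u191+ and 10+) read the cgroup limit and honour -XX:MaxRAMPercentage. A sketch using the standard JAVA_TOOL_OPTIONS variable in the container spec (the percentage is an arbitrary choice, leaving headroom for metaspace and stacks):

    env:
    - name: JAVA_TOOL_OPTIONS
      value: "-XX:MaxRAMPercentage=75.0"   # heap = 75% of the container limit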



A note about K8s metric

A mebibyte (MiB) is a unit of measurement used in computer data storage, equal to:

1 MiB = 2^20 = 1,048,576 bytes

To avoid confusion, K8s uses mebibytes rather than megabytes (MB), because a megabyte is 10^6 bytes.

The same precision-minded approach is used for CPU, which is expressed in millicores (1000m = 1 CPU).
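As a sketch, the fragment at the top of this post could equivalently be written with these units:

    requests:
      cpu: "900m"       # 0.90 CPU
      memory: "1Gi"     # 1024Mi, i.e. 2^30 bytes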

