1 vote

In OpenShift/Kubernetes, I want to test how my application (a pod consisting of 2 containers) performs on machines with different numbers of cores. The machine I have at hand has 32 cores, but I'd like to limit those to 4, 8, 16...

One way is to use resource limits on the containers, but that would force me to fix the CPU ratio between the two containers; instead, I want to set a resource limit for the whole pod and let the containers compete for CPU. My feeling is that this should be possible, as the containers could belong to the same cgroup and therefore share the limit from the CPU scheduler's point of view.
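
To illustrate what I mean, a per-container setup (container and image names below are just placeholders) would pin the split, e.g. 3 + 1 cores on a 4-core budget, instead of letting the containers compete:

apiVersion: v1
kind: Pod
metadata:
  name: two-container-demo       # placeholder name
spec:
  containers:
  - name: app                    # hypothetical container name
    image: example/app:latest    # placeholder image
    resources:
      limits:
        cpu: "3"                 # 3 of the 4 cores are reserved for this container
  - name: sidecar                # hypothetical container name
    image: example/sidecar:latest
    resources:
      limits:
        cpu: "1"                 # the remaining core is fixed for this one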

Would a LimitRange on the pod do what I am looking for? LimitRange is project/namespace-scoped; is there a way to achieve the same with finer granularity (just for certain pods)?
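
What I had in mind is something along these lines (a sketch only; the name is a placeholder), though as far as I understand it applies to the whole namespace and only validates the sum of the containers' limits at admission time, rather than creating a shared pod-level CPU budget:

apiVersion: v1
kind: LimitRange
metadata:
  name: pod-cpu-cap              # placeholder name
spec:
  limits:
  - type: Pod
    max:
      cpu: "4"                   # sum of all containers' CPU limits must stay <= 4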


2 Answers

2 votes

As per the documentation, resource constraints are only applicable at the container level. You can, however, define different requests and limits to allow the container to burst beyond the amount defined in requests, but this comes with other implications; see Quality of Service.
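
For example, a container spec along these lines (values and image are illustrative only) is guaranteed 500m of CPU for scheduling purposes but may burst up to 4 cores when they are free, which places the pod in the Burstable QoS class:

apiVersion: v1
kind: Pod
metadata:
  name: burstable-demo           # placeholder name
spec:
  containers:
  - name: app
    image: example/app:latest    # placeholder image
    resources:
      requests:
        cpu: 500m                # guaranteed share, used for scheduling
      limits:
        cpu: "4"                 # may burst up to 4 cores when they are free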

The reason for this is that some resources, such as memory, cannot be competed for in the way CPU can. Memory is either sufficient or it is not; there is no such thing in Kubernetes as shared RAM (unless you explicitly make the relevant system calls).

May I ask what the use case for pod-internal CPU competition is?

0 votes

How about controlling resource usage inside your K8s cluster with a ResourceQuota? This should enable you to benchmark CPU/memory usage by your pod inside a dedicated namespace, with the help of the kube_resourcequota monitoring metrics, under different conditions set with a LimitRange or directly with the containers' resource limits and requests.

What I mean exactly is to set a ResourceQuota similar to this one:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"        # total CPU requests allowed in the namespace
    requests.memory: 1Gi     # total memory requests allowed in the namespace
    limits.cpu: "2"          # total CPU limits allowed in the namespace
    limits.memory: 2Gi       # total memory limits allowed in the namespace
    pods: "1"                # at most one pod may run in the namespace

and run the pod with resource limits and requests:

 ...
 containers:
    - image: gcr.io/google-samples/hello-app:1.0
      imagePullPolicy: IfNotPresent
      name: hello-app
      ports:
      - containerPort: 8080
        protocol: TCP
      resources:
        limits:
          cpu: "1"
          memory: 800Mi
        requests:
          cpu: 900m
          memory: 600Mi 
  ...

and then just observe in a monitoring console how the pod performs*, for instance with Prometheus:

[Prometheus chart of the pod's memory usage against the ResourceQuota limit]

* Green represents overall memory usage by the Pod; red represents the fixed/hard resource limit set with the ResourceQuota.

I guess you would opt for reducing the gap between the lines to avoid an under-committed system, while at the same time avoiding Pod failures like this one:

  status:
    message: 'Pod Node didn''t have enough resource: cpu, requested: 400, used: 893,
      capacity: 940'
    phase: Failed
    reason: OutOfcpu

Of course, ideally this memory usage trend would be stacked on a cockpit chart together with some other custom/performance monitoring metrics of your interest.