4
votes

Is there a way to get the actual resource (CPU and memory) constraints inside a container?

Say the node has 4 cores, but my container is configured with only 1 core through resource requests/limits. It is therefore effectively limited to 1 core of CPU time, yet it still sees 4 cores in /proc/cpuinfo. I want to determine the number of threads for my application based on the number of cores it can actually use. I'm also interested in memory.

2
I want to point out that while @janos's answer is helpful, giving a pod 1 CPU unit doesn't limit it to using only 1 core. You should still allow your application to run multiple threads, especially if there is a decent amount of idle time. On the other hand, if your application is crunching numbers 100% of the time with no idling (not event-based), then yes, you may want to limit the number of threads in your application, knowing that you only have 1 CPU unit assigned. kubernetes.io/docs/concepts/configuration/… – Michael Butler

2 Answers

10
votes

Short answer

You can use the Downward API to access the resource requests and limits. There is no need for service accounts or any other access to the apiserver for this.

Example:

apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-resourcefieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox:1.24
      command: [ "sh", "-c"]
      args:
      - while true; do
          echo -en '\n';
          printenv MY_CPU_REQUEST MY_CPU_LIMIT;
          printenv MY_MEM_REQUEST MY_MEM_LIMIT;
          sleep 10;
        done;
      resources:
        requests:
          memory: "32Mi"
          cpu: "125m"
        limits:
          memory: "64Mi"
          cpu: "250m"
      env:
        - name: MY_CPU_REQUEST
          valueFrom:
            resourceFieldRef:
              containerName: test-container
              resource: requests.cpu
              divisor: "1m"
        - name: MY_CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: test-container
              resource: limits.cpu
              divisor: "1m"
        - name: MY_MEM_REQUEST
          valueFrom:
            resourceFieldRef:
              containerName: test-container
              resource: requests.memory
        - name: MY_MEM_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: test-container
              resource: limits.memory
  restartPolicy: Never

Test:

$ kubectl logs dapi-envars-resourcefieldref
125
250
33554432
67108864

Long answer

Kubernetes translates resource requests and limits into kernel primitives. It is possible to access that information from inside the pod too, but it is considerably more complicated and also not portable (Windows nodes, anyone?):

  • CPU request: /sys/fs/cgroup/cpu/kubepods/..QOS../podXX/cpu.shares (divide by 1024 to get the requested number of cores)
  • CPU limit: /sys/fs/cgroup/cpu/kubepods/..QOS../podXX/cpu.cfs_period_us and cpu.cfs_quota_us (divide cfs_quota_us by cfs_period_us to get the limit relative to 1 core; see the sketch after this list)
  • Memory limit: /sys/fs/cgroup/memory/kubepods/..QOS../podXX/memory.limit_in_bytes
  • Memory request: this one is tricky. It gets translated into OOM adjustment scores under /proc/..PID../oom_score_adj. Good luck calculating that back into a memory request amount :)
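For completeness, here is a rough shell sketch of reading those files from inside the container. Treat the paths as an assumption: they match cgroup v1 as typically mounted by Docker/containerd, while cgroup v2 exposes cpu.max and memory.max instead:

# cgroup v1 view from inside the container; a quota of -1 means "no CPU limit"
quota=$(cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us)
period=$(cat /sys/fs/cgroup/cpu/cpu.cfs_period_us)
shares=$(cat /sys/fs/cgroup/cpu/cpu.shares)
if [ "$quota" -gt 0 ]; then
  echo "CPU limit: $(( quota * 1000 / period )) millicores"
else
  echo "no CPU limit"
fi
echo "CPU request: about $(( shares * 1000 / 1024 )) millicores"
echo "memory limit: $(cat /sys/fs/cgroup/memory/memory.limit_in_bytes) bytes"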

Short answer is great, right? ;)

0
votes

You can check node capacities and amounts allocated with the kubectl describe nodes command. For example:

kubectl describe nodes e2e-test-node-pool-4lw4

Each Container of a Pod can specify one or more of the following (a kubectl sketch for reading these values back follows the list):

  • spec.containers[].resources.limits.cpu
  • spec.containers[].resources.limits.memory
  • spec.containers[].resources.requests.cpu
  • spec.containers[].resources.requests.memory
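If you just want to read those configured values back for a specific pod (from outside the container, without the Downward API), a jsonpath query is a quick option; mypod and the container index 0 below are placeholders:

kubectl get pod mypod -o jsonpath='{.spec.containers[0].resources.requests.cpu}'
kubectl get pod mypod -o jsonpath='{.spec.containers[0].resources.limits.memory}'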