Short answer
You can use the Downward API to access the resource requests and limits. There is no need for service accounts or any other access to the apiserver for this.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-resourcefieldref
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox:1.24
    command: [ "sh", "-c"]
    args:
    - while true; do
        echo -en '\n';
        printenv MY_CPU_REQUEST MY_CPU_LIMIT;
        printenv MY_MEM_REQUEST MY_MEM_LIMIT;
        sleep 10;
      done;
    resources:
      requests:
        memory: "32Mi"
        cpu: "125m"
      limits:
        memory: "64Mi"
        cpu: "250m"
    env:
    - name: MY_CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: test-container
          resource: requests.cpu
          divisor: "1m"
    - name: MY_CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: test-container
          resource: limits.cpu
          divisor: "1m"
    - name: MY_MEM_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: test-container
          resource: requests.memory
    - name: MY_MEM_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: test-container
          resource: limits.memory
  restartPolicy: Never
Test (the CPU values are in millicores because of the 1m divisor; the memory values are in bytes):
$ kubectl logs dapi-envars-resourcefieldref
125
250
33554432
67108864
Long answer
Kubernetes translates resource requests and limits to kernel primitives. It is possible to access that information from inside the pod too, but it is considerably more complicated and not portable (Window$ nodes, anyone?):
- CPU request/limit: /sys/fs/cgroup/cpu/kubepods/..QOS../podXX/cpu.* : cpu.shares holds the request (divide by 1024 to get it in cores), and cpu.cfs_quota_us together with cpu.cfs_period_us holds the limit (divide cfs_quota_us by cfs_period_us to get the limit relative to 1 core). See the sketch after this list.
- Memory limit: /sys/fs/cgroup/memory/kubepods/..QOS../podXX/memory.limit_in_bytes
- Memory request: this one is tricky. It gets translated into an OOM adjustment score under /proc/..PID../oom_score_adj. Good luck calculating that back to a memory request amount :)
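As a rough sketch (not the Downward API, just an illustration of the cgroup route): the snippet below reads those values from inside the container, assuming cgroup v1 and that the container's own cgroup is visible at /sys/fs/cgroup, which is the common case on Linux nodes; the paths differ on cgroup v2, which is exactly the non-portable part.

# Sketch only: assumes cgroup v1 and that these files exist inside the container.
cpu_shares=$(cat /sys/fs/cgroup/cpu/cpu.shares)          # requests.cpu in cores * 1024
cfs_quota=$(cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us)     # -1 means no CPU limit set
cfs_period=$(cat /sys/fs/cgroup/cpu/cpu.cfs_period_us)
mem_limit=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)

awk -v s="$cpu_shares" 'BEGIN { printf "cpu request (cores): %.3f\n", s / 1024 }'
if [ "$cfs_quota" -gt 0 ]; then
  awk -v q="$cfs_quota" -v p="$cfs_period" 'BEGIN { printf "cpu limit (cores): %.3f\n", q / p }'
fi
echo "memory limit (bytes): $mem_limit"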
Short answer is great, right? ;)