0
votes

I'm using an AKS cluster running Kubernetes v1.16.15.

I'm following this simple example for assigning CPU resources to a Pod, and it does not work: https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/

After applying this YAML file with the CPU request and limit:

apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
  namespace: cpu-example
spec:
  containers:
  - name: cpu-demo-ctr
    image: vish/stress
    resources:
      limits:
        cpu: "1"
      requests:
        cpu: "0.5"
    args:
    - -cpus
- "2"

If I run kubectl describe pod ..., I get the following:

Events:
Type     Reason            Age        From               Message
----     ------            ----       ----               -------
Warning  FailedScheduling  <unknown>   default-scheduler  0/1 nodes are available: 1 Insufficient cpu.

But CPU seems to be available; if I run kubectl top nodes, I get:

CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%

702m         36%    4587Mi          100%

Maybe it is related to some AKS configuration, but I can't figure it out.

Do you have an idea of what is happening?

Thanks a lot in advance!!


2 Answers

1
votes

Kubernetes decides where a pod can be scheduled based on a node's allocatable resources, not on its real resource usage. You can see your node's allocatable resources with kubectl describe node <your node name>; refer to Capacity and Allocatable for more details. As the event log 0/1 nodes are available: 1 Insufficient cpu. shows, you have just one worker node, and that node does not have enough unreserved CPU left to run your pod with requests.cpu: "0.5". Pod scheduling is based on the requests resource size, not on the limits.
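
For illustration, kubectl describe node on a small 2-vCPU node prints something like the block below (the numbers are made up, check your own node). The scheduler only looks at the Allocatable section, which on AKS is noticeably smaller than Capacity because CPU and memory are reserved for the kubelet and system daemons:

Capacity:
  cpu:                2
  memory:             7113660Ki
  pods:               110
Allocatable:
  cpu:                1900m
  memory:             4668348Ki
  pods:               110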

0
votes

The previous answer explains well why this can happen. What can be added is that when scheduling pods with resource requests, you also have to be aware of the resources your other cluster objects consume. System objects use your resources too, and even on a small cluster you may have enabled add-ons that consume node resources.

So your node has a certain amount of CPU and memory it can allocate to pods. When scheduling, the scheduler only takes into consideration nodes with enough unallocated resources to meet the pod's requests. If the amount of unallocated CPU or memory is less than what the pod requests, Kubernetes will not schedule the pod to that node, because the node can't provide the minimum amount the pod requires.

If you describe your node, you will see the pods that are already running and consuming resources, as well as the total allocated resources:

  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
  default                     elasticsearch-master-0                         1 (25%)       1 (25%)     2Gi (13%)        4Gi (27%)      8d
  default                     test-5487d9b57b-4pz8v                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27d
  kube-system                 coredns-66bff467f8-rhbnj                       100m (2%)     0 (0%)      70Mi (0%)        170Mi (1%)     35d
  kube-system                 etcd-minikube                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16d
  kube-system                 httpecho                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         34d
  kube-system                 ingress-nginx-controller-69ccf5d9d8-rbdf8      100m (2%)     0 (0%)      90Mi (0%)        0 (0%)         34d
  kube-system                 kube-apiserver-minikube                        250m (6%)     0 (0%)      0 (0%)           0 (0%)         16d
  kube-system                 kube-controller-manager-minikube               200m (5%)     0 (0%)      0 (0%)           0 (0%)         35d
  kube-system                 kube-scheduler-minikube                        100m (2%)     0 (0%)      0 (0%)           0 (0%)         35d
  kube-system                 traefik-ingress-controller-78b4959fdf-8kp5k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         34d

Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests      Limits
  --------           --------      ------
  cpu                1750m (43%)   1 (25%)
  memory             2208Mi (14%)  4266Mi (28%)
  ephemeral-storage  0 (0%)        0 (0%)
  hugepages-1Gi      0 (0%)        0 (0%)
  hugepages-2Mi      0 (0%)        0 (0%)
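
To make the check concrete: the scheduler subtracts the sum of the existing CPU requests from the node's allocatable CPU, and the new pod only fits if its own request is no larger than what is left. With made-up numbers in the same ballpark as a small AKS node:

  allocatable CPU                  1900m
  sum of existing CPU requests     1500m   (system pods, add-ons, your workloads)
  --------------------------------------
  unallocated CPU                   400m

  cpu-demo requests 500m > 400m unallocated  =>  0/1 nodes are available: 1 Insufficient cpu.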

Now the most important part is what you can do about that:

  1. You can enable the cluster autoscaler so that the system automatically provisions extra nodes and the resources you need. This of course assumes that you have actually run out of resources and need more (see the AKS example below).
  2. You can provision an appropriate node yourself (how depends on how you bootstrapped your cluster).
  3. Turn off any add-on services you don't need that might be taking the resources you want.
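
For example, on AKS (which the question uses), options 1 and 2 map to the Azure CLI commands below; the resource group, cluster name and node counts are placeholders:

# Option 1: enable the cluster autoscaler on the cluster
az aks update --resource-group <my-resource-group> --name <my-aks-cluster> \
  --enable-cluster-autoscaler --min-count 1 --max-count 3

# Option 2: manually scale the node pool to add a node
az aks scale --resource-group <my-resource-group> --name <my-aks-cluster> --node-count 2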