
Hi, I am new to Kubernetes.

1) I am not able to scale containers/pods onto the worker node, and its memory usage always remains zero. Any reason why?

2) Whenever I scale pods/containers, they are always created on the master node.

3) Is there any way to restrict pods to specific nodes?

4) How are pods distributed across nodes when I scale?

Any help is appreciated.

kubectl version

Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:00:57Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

kubectl describe nodes

Name:               worker-node
Roles:              worker
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=worker-node
                    node-role.kubernetes.io/worker=worker
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 19 Feb 2019 15:03:33 +0530
Taints:             node.kubernetes.io/disk-pressure:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------    -----------------                 ------------------                ------                       -------
  MemoryPressure   False     Tue, 19 Feb 2019 18:57:22 +0530   Tue, 19 Feb 2019 15:26:13 +0530   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     True      Tue, 19 Feb 2019 18:57:22 +0530   Tue, 19 Feb 2019 15:26:23 +0530   KubeletHasDiskPressure       kubelet has disk pressure
  PIDPressure      False     Tue, 19 Feb 2019 18:57:22 +0530   Tue, 19 Feb 2019 15:26:13 +0530   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True      Tue, 19 Feb 2019 18:57:22 +0530   Tue, 19 Feb 2019 15:26:13 +0530   KubeletReady                 kubelet is posting ready status. AppArmor enabled
  OutOfDisk        Unknown   Tue, 19 Feb 2019 15:03:33 +0530   Tue, 19 Feb 2019 15:25:47 +0530   NodeStatusNeverUpdated       Kubelet never posted node status.
Addresses:
  InternalIP:  192.168.1.10
  Hostname:    worker-node
Capacity:
 cpu:                4
 ephemeral-storage:  229335396Ki
 hugepages-2Mi:      0
 memory:             16101704Ki
 pods:               110
Allocatable:
 cpu:                4
 ephemeral-storage:  211355500604
 hugepages-2Mi:      0
 memory:             15999304Ki
 pods:               110
System Info:
 Machine ID:                 1082300ebda9485cae458a9761313649
 System UUID:                E4DAAC81-5262-11CB-96ED-94898013122F
 Boot ID:                    ffd5ce4b-437f-4497-9337-e72c06f88429
 Kernel Version:             4.15.0-45-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.6.1
 Kubelet Version:            v1.13.3
 Kube-Proxy Version:         v1.13.3
PodCIDR:                     192.168.1.0/24
Non-terminated Pods:         (0 in total)
  Namespace                  Name    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----    ------------  ----------  ---------------  -------------  ---
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests  Limits
  --------           --------  ------
  cpu                0 (0%)    0 (0%)
  memory             0 (0%)    0 (0%)
  ephemeral-storage  0 (0%)    0 (0%)
Events:
  Type     Reason                Age                     From                       Message
  ----     ------                ----                    ----                       -------
  Normal   Starting              55m                     kube-proxy, worker-node  Starting kube-proxy.
  Normal   Starting              55m                     kube-proxy, worker-node  Starting kube-proxy.
  Normal   Starting              33m                     kube-proxy, worker-node  Starting kube-proxy.
  Normal   Starting              11m                     kube-proxy, worker-node  Starting kube-proxy.
  Warning  EvictionThresholdMet  65s (x1139 over 3h31m)  kubelet, worker-node     Attempting to reclaim ephemeral-storage



1 Answer


This is very strange; by default, Kubernetes taints the master node so that ordinary pods are not scheduled on it.

kubectl describe node $HOSTNAME | grep Taints

Now check that the output shows the default master taint

node-role.kubernetes.io/master:NoSchedule

If your master doesn't have this taint, you can taint it again with:

kubectl taint nodes $HOSTNAME node-role.kubernetes.io/master:NoSchedule