"spec": {
  "containers": [
    {
      "name": "test",
      "image": "gcr.io/helloworldnodejs-1119/mytetest",
      "resources": {
        "requests": {
          "cpu": "500m",
          "memory": "128Mi"
        }
      },
      "env": [
        {
          "name": "GET_HOSTS_FROM",
          "value": "dns"
        }
      ],
      "ports": [
        {
          "name": "middleware-server",
          "containerPort": 8000,
          "hostPort": 8000
        }
      ]
    }
  ]
}


RajRajen:mytetest rajrajen$ kubectl describe pod lbmiddleware-6e1hi
Name:                           lbmiddleware-6e1hi
Namespace:                      default
Image(s):                       gcr.io/helloworldnodejs-1119/mytetest
Node:                           /
Labels:                         app=mymiddleware,tier=mymiddleware
**Status:                       Pending**
Reason:
Message:
IP:
Replication Controllers:        mymiddleware (1/1 replicas created)
Containers:
  lb4btest:
    Image:          gcr.io/helloworldnodejs-1119/mytetest
    **Limits:
      cpu:          100m**
    ***State:       Waiting***
    Ready:          False
    Restart Count:  0
Events:
  FirstSeen                       LastSeen                        Count   From          SubobjectPath   Reason                  Message
  Thu, 12 Nov 2015 11:05:01 -0800 Thu, 12 Nov 2015 11:05:16 -0800 5       {scheduler }                  ***failedScheduling     Failed for reason PodFitsResources and possibly others***

My Docker image is only about 130MB, and I requested 500m CPU / 128Mi memory in the pod spec (GKE container creation), yet the describe output still shows Limits: cpu: 100m.
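To confirm whether the node really has room left, you can look at its capacity and at what the system pods already request; the node name below is just a placeholder for whatever kubectl get nodes prints:

# List the nodes in the cluster
kubectl get nodes
# Show the node's capacity and the resources already requested by pods running on it
kubectl describe node <node-name>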

I am following https://cloud.google.com/container-engine/docs/tutorials/guestbook, but instead of the .yaml files I am using middleware-controller.json to create the replication controller on GKE (Google Container Engine).

RajRajen:lb4btest rajrajen$ kubectl create -f middleware-controller.json
replicationcontrollers/lbmiddleware

Command used earlier to create the cluster: gcloud container clusters create lb4b-test-cluster --num-nodes 1 --machine-type g1-small
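For reference, g1-small is a shared-core machine type, so there is not much CPU available to begin with; you can inspect its specs like this (the zone is only an example):

# Show vCPU and memory for the g1-small machine type
gcloud compute machine-types describe g1-small --zone us-central1-a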

This is the final result of the docker push to the Google Container Registry:

latest: digest: sha256:3c73d0c25e65c39164258c384b34d2cab72303375c8d3f6a2e70930000b9e171 **size: 132946**

1 Answer


You created a 1-node cluster with a relatively small machine type. By default, Google Container Engine runs cluster add-ons (logging & monitoring) that take up some of the resources in your cluster (you can disable these if you wish). It looks like you don't have enough remaining capacity in your cluster to launch a pod that requires this many resources. Try disabling the cluster add-ons when you create the cluster, or provision larger (or more) nodes.
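For example, something along these lines should work; the flags reflect the gcloud options available around that time, and the node count / machine type are only illustrative, so check gcloud container clusters create --help on your version:

# Option 1: create the cluster without the logging/monitoring add-ons
gcloud container clusters create lb4b-test-cluster --no-enable-cloud-logging --no-enable-cloud-monitoring --num-nodes 1 --machine-type g1-small

# Option 2: give the scheduler more room with more and/or larger nodes
gcloud container clusters create lb4b-test-cluster --num-nodes 3 --machine-type n1-standard-1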