
According to https://cloud.google.com/kubernetes-engine/docs/concepts/regional-clusters#pd, "Once a persistent disk is provisioned, any Pods referencing the disk are scheduled to the same zone as the disk." But when I tested this, that is not what happened.

The procedure to create the disk:

gcloud compute disks create mongodb --size=1GB --zone=asia-east1-c
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.
Created [https://www.googleapis.com/compute/v1/projects/ornate-ensign-234106/zones/asia-east1-c/disks/mongodb].
NAME     ZONE          SIZE_GB  TYPE         STATUS
mongodb  asia-east1-c  1        pd-standard  READY

New disks are unformatted. You must format and mount a disk before it
can be used. You can find instructions on how to do this at:

https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting
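
To double-check which zone the disk actually lives in, listing it prints the same NAME/ZONE table as above:

gcloud compute disks list --filter="name=mongodb"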

The nodes in the cluster:

Name                                  Zone          In use by                                                       Internal IP         External IP
gke-kubia-default-pool-08dd2133-qbz6  asia-east1-a  k8s-ig--c4addd497b1e0a6d, gke-kubia-default-pool-08dd2133-grp  10.140.0.17 (nic0)  35.201.224.238
gke-kubia-default-pool-183639fa-18vr  asia-east1-c  gke-kubia-default-pool-183639fa-grp, k8s-ig--c4addd497b1e0a6d  10.140.0.18 (nic0)  35.229.152.12
gke-kubia-default-pool-42725220-43q8  asia-east1-b  gke-kubia-default-pool-42725220-grp, k8s-ig--c4addd497b1e0a6d  10.140.0.16 (nic0)  34.80.225.6
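
The zones can also be read from the Kubernetes side via node labels (this assumes the legacy failure-domain zone label that GKE set on nodes at the time; newer clusters use topology.kubernetes.io/zone):

kubectl get nodes -L failure-domain.beta.kubernetes.io/zone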

The YAML to create the pod:

apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  volumes:
  - name: mongodb-data
    gcePersistentDisk:
      pdName: mongodb
      fsType: ext4
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP
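
The manifest is then applied as usual (the file name is simply whatever you saved it as; mongodb-pod.yaml here):

C:\kube>kubectl create -f mongodb-pod.yaml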

The pod is expected to be scheduled on gke-kubia-default-pool-183639fa-18vr, which is in zone asia-east1-c. But:

C:\kube>kubectl get pod -o wide
NAME          READY   STATUS              RESTARTS   AGE    IP           NODE                                   NOMINATED NODE
fortune       2/2     Running             0          4h9m   10.56.3.5    gke-kubia-default-pool-42725220-43q8   <none>
kubia-4jmzg   1/1     Running             0          9d     10.56.1.6    gke-kubia-default-pool-183639fa-18vr   <none>
kubia-j2lnr   1/1     Running             0          9d     10.56.3.4    gke-kubia-default-pool-42725220-43q8   <none>
kubia-lrt9x   1/1     Running             0          9d     10.56.0.14   gke-kubia-default-pool-08dd2133-qbz6   <none>
mongodb       0/1     ContainerCreating   0          55s    <none>       gke-kubia-default-pool-42725220-43q8   <none>

C:\kube>kubectl describe pod mongodb
Name:               mongodb
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               gke-kubia-default-pool-42725220-43q8/10.140.0.16
Start Time:         Thu, 20 Jun 2019 15:39:13 +0800
Labels:             <none>
Annotations:        kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container mongodb
Status:             Pending
IP:
Containers:
  mongodb:
    Container ID:
    Image:          mongo
    Image ID:
    Port:           27017/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /data/db from mongodb-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sd57s (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  mongodb-data:
    Type:       GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
    PDName:     mongodb
    FSType:     ext4
    Partition:  0
    ReadOnly:   false
  default-token-sd57s:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-sd57s
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason              Age                   From                                           Message
  ----     ------              ----                  ----                                           -------
  Normal   Scheduled           10m                   default-scheduler                              Successfully assigned default/mongodb to gke-kubia-default-pool-42725220-43q8
  Warning  FailedMount         106s (x4 over 8m36s)  kubelet, gke-kubia-default-pool-42725220-43q8  Unable to mount volumes for pod "mongodb_default(7fe9c096-932e-11e9-bb3d-42010a8c00de)": timeout expired waiting for volumes to attach or mount for pod "default"/"mongodb". list of unmounted volumes=[mongodb-data]. list of unattached volumes=[mongodb-data default-token-sd57s]
  Warning  FailedAttachVolume  9s (x13 over 10m)     attachdetach-controller                        AttachVolume.Attach failed for volume "mongodb-data" : GCE persistent disk not found: diskName="mongodb" zone="asia-east1-b"

C:\kube>

Does anybody know why this happens?

1 Answer

The problem here is that the pod was scheduled onto a node in asia-east1-b, while the disk is provisioned in asia-east1-c, so the attach fails (that is exactly what the FailedAttachVolume event reports). As far as I can tell, the zone-aware scheduling the docs describe only applies when the disk is consumed through a PersistentVolume carrying zone labels; an inline gcePersistentDisk volume in the pod spec gives the scheduler no zone information, so it may place the pod in any zone.

What you can do here is use a nodeSelector: label the node yourself (or reuse a label the node already has, such as the zone label GKE applies), then specify that label in your YAML for the pod. This way the pod will be scheduled onto the node in asia-east1-c and the disk can be attached and mounted; see the sketch below.
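
A minimal sketch of the same pod with a zone nodeSelector added (this assumes the legacy failure-domain.beta.kubernetes.io/zone label that GKE nodes carried on clusters of that era; newer clusters expose topology.kubernetes.io/zone instead):

apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  # Restrict scheduling to nodes in the disk's zone.
  nodeSelector:
    failure-domain.beta.kubernetes.io/zone: asia-east1-c
  volumes:
  - name: mongodb-data
    gcePersistentDisk:
      pdName: mongodb
      fsType: ext4
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP

With that selector in place the scheduler only considers the asia-east1-c node, so the attach should succeed.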