I have set up a single-node K8s cluster using kubeadm by following the instructions here:
The cluster is up, and apart from kube-dns (still Pending at this point; see the update at the end) the system pods are running fine:
[root@umeshworkstation hostpath-provisioner]# kubectl get pods -n kube-system
NAME                                        READY     STATUS    RESTARTS   AGE
calico-etcd-n988r                           1/1       Running   10         6h
calico-node-n1wmk                           2/2       Running   10         6h
calico-policy-controller-1777954159-bd8rn   1/1       Running   0          6h
etcd-umeshworkstation                       1/1       Running   1          6h
kube-apiserver-umeshworkstation             1/1       Running   1          6h
kube-controller-manager-umeshworkstation    1/1       Running   1          6h
kube-dns-3913472980-2ptjj                   0/3       Pending   0          6h
kube-proxy-1d84l                            1/1       Running   1          6h
kube-scheduler-umeshworkstation             1/1       Running   1          6h
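For context, the bootstrap was the standard kubeadm flow; roughly the following, from my recollection of that guide (treat the exact commands as a sketch, not a transcript):

kubeadm init
# single-node cluster, so let normal workloads schedule on the master:
kubectl taint nodes --all node-role.kubernetes.io/master-
# network plugin: applied the hosted Calico manifest for kubeadm from the
# Calico docs (that is where the calico-* pods above come from)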
I then downloaded the hostpath external provisioner code from kubernetes-incubator and built it locally on the same node. The Docker image for the provisioner was built successfully, and I could even instantiate the provisioner pod using pod.yaml from the same location. The pod is running fine:
[root@umeshworkstation hostpath-provisioner]# kubectl describe pod hostpath-provisioner
Name:           hostpath-provisioner
Namespace:      default
Node:           umeshworkstation/172.17.24.123
Start Time:     Tue, 09 May 2017 23:44:41 -0400
Labels:         <none>
Annotations:    <none>
Status:         Running
IP:             192.168.8.65
Controllers:    <none>
Containers:
  hostpath-provisioner:
    Container ID:   docker://c600cfa7a2f5f958ad24e83372a1276a91b41cb67773b9605af4a0ae021ec914
    Image:          hostpath-provisioner:latest
    Image ID:       docker://sha256:f6def41ba7c096701c65bf0c0aba6ff31e030573e1a900e378432491ecc5c556
    Port:
    State:          Running
      Started:      Tue, 09 May 2017 23:44:45 -0400
    Ready:          True
    Restart Count:  0
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /tmp/hostpath-provisioner from pv-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7wwvj (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  pv-volume:
    Type:  HostPath (bare host directory volume)
    Path:  /tmp/hostpath-provisioner
  default-token-7wwvj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-7wwvj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady=:Exists:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable=:Exists:NoExecute for 300s
Events:          <none>
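For reference, pod.yaml boils down to a manifest along these lines; I have reconstructed it from the describe output above, so the visible fields should match, but anything not shown there (such as imagePullPolicy) is an assumption:

kind: Pod
apiVersion: v1
metadata:
  name: hostpath-provisioner
spec:
  containers:
    - name: hostpath-provisioner
      image: hostpath-provisioner:latest
      imagePullPolicy: IfNotPresent   # assumption: needed so kubelet picks up the locally built image
      env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
      volumeMounts:
        - name: pv-volume
          mountPath: /tmp/hostpath-provisioner
  volumes:
    - name: pv-volume
      hostPath:
        path: /tmp/hostpath-provisioner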
I then created the storage class as per the instructions on the project's home page, and the storage class was created fine:
[root@umeshworkstation hostpath-provisioner]# kubectl describe sc example-hostpath
Name:            example-hostpath
IsDefaultClass:  No
Annotations:     <none>
Provisioner:     example.com/hostpath
Parameters:      <none>
Events:          <none>
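The class itself is trivial; judging from the describe output it amounts to no more than this (sketch, using the storage.k8s.io/v1 apiVersion available as of 1.6):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-hostpath
provisioner: example.com/hostpath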
The next step was to create a PVC using claim.yaml from the same location, but the PVC remains in Pending state, and describe shows it's unable to locate the provisioner example.com/hostpath:
[root@umeshworkstation hostpath-provisioner]# kubectl describe pvc
Name:          hostpath
Namespace:     default
StorageClass:  example-hostpath
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-class=example-hostpath
               volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath
Capacity:
Access Modes:
Events:
  FirstSeen  LastSeen  Count  From                         SubObjectPath  Type    Reason                Message
  ---------  --------  -----  ----                         -------------  ----    ------                -------
  2h         11s       874    persistentvolume-controller                 Normal  ExternalProvisioning  cannot find provisioner "example.com/hostpath", expecting that a volume for the claim is provisioned either manually or via external software
The PVC has remained in Pending state ever since because of this.
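For completeness, claim.yaml looks roughly like this; the claim name and storage-class annotation match the describe output above, while the access mode and requested size are my assumptions:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hostpath
  annotations:
    volume.beta.kubernetes.io/storage-class: "example-hostpath"
spec:
  accessModes:
    - ReadWriteOnce   # assumption: not visible in the describe output
  resources:
    requests:
      storage: 1Mi    # assumption: not visible in the describe output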
Am I missing something?
Update: the kube-dns system pod was in Pending state due to insufficient CPU resources. I added more CPU and the kube-dns pod is now running fine, but the PVC continues to be in Pending state for the same reason as before. The provisioner pod's log shows why it never answers the claim:

E0511 10:46:32.442145 1 reflector.go:201] hostpath-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:294: Failed to list *v1.PersistentVolumeClaim: User "system:serviceaccount:default:default" cannot list persistentvolumeclaims at the cluster scope. (get persistentvolumeclaims)
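That error is an RBAC problem: the pod runs as the default service account, which is not allowed to list PVCs cluster-wide. A minimal sketch of the permissions an external provisioner typically needs (object names are illustrative; binding to the default service account is the quick fix here, a dedicated service account would be cleaner):

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: hostpath-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: run-hostpath-provisioner
subjects:
  - kind: ServiceAccount
    name: default        # the account the error message names
    namespace: default
roleRef:
  kind: ClusterRole
  name: hostpath-provisioner-runner
  apiGroup: rbac.authorization.k8s.io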