0 votes

I have set up a Kubernetes cluster on 2 Ubuntu VMs:

$ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
vm-hps10   Ready     master    33m       v1.10.1
vm-hps11   Ready     <none>    11s       v1.10.1

I have an image built locally called user-service, so I start it using the kubectl run command:

$ kubectl run user-service --image=user-service --port=8080
deployment.apps "user-service" created

As soon as I do this I see a lot of containers spinning up on my worker node, i.e. when I do a docker ps -a on the worker I see:

CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS                              PORTS               NAMES
53de78d6ea71        k8s.gcr.io/pause-amd64:3.1   "/pause"                 1 second ago        Exited (0) Less than a second ago                       k8s_POD_user-service-6d9f9c9977-zdq9x_default_e4b92bf5-43ca-11e8-a03d-00155d0c662c_34
8a0b122e9ca9        k8s.gcr.io/pause-amd64:3.1   "/pause"                 2 seconds ago       Exited (0) 1 second ago                                 k8s_POD_user-service-6d9f9c9977-zdq9x_default_e4b92bf5-43ca-11e8-a03d-00155d0c662c_33
59e940adbff0        k8s.gcr.io/pause-amd64:3.1   "/pause"                 3 seconds ago       Exited (0) 2 seconds ago                                k8s_POD_user-service-6d9f9c9977-zdq9x_default_e4b92bf5-43ca-11e8-a03d-00155d0c662c_32
c0db383d7db8        k8s.gcr.io/pause-amd64:3.1   "/pause"                 4 seconds ago       Exited (0) 3 seconds ago                                k8s_POD_user-service-6d9f9c9977-zdq9x_default_e4b92bf5-43ca-11e8-a03d-00155d0c662c_31
c4c21c7a8e65        k8s.gcr.io/pause-amd64:3.1   "/pause"                 5 seconds ago       Exited (0) 4 seconds ago                                k8s_POD_user-service-6d9f9c9977-zdq9x_default_e4b92bf5-43ca-11e8-a03d-00155d0c662c_30
3dfcd0b39597        k8s.gcr.io/pause-amd64:3.1   "/pause"                 6 seconds ago       Exited (0) 5 seconds ago                                k8s_POD_user-service-6d9f9c9977-zdq9x_default_e4b92bf5-43ca-11e8-a03d-00155d0c662c_29
d6aa24274e7d        k8s.gcr.io/pause-amd64:3.1   "/pause"                 7 seconds ago       Exited (0) 6 seconds ago                                k8s_POD_user-service-6d9f9c9977-zdq9x_default_e4b92bf5-43ca-11e8-a03d-00155d0c662c_28
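
From what I understand, the k8s.gcr.io/pause containers are just the pod sandbox, so the fact that they keep exiting suggests the sandbox itself is being torn down and recreated over and over. To see why, I assume I can tail the kubelet logs on the worker (kubelet runs as a systemd unit in my kubeadm setup):

$ sudo journalctl -u kubelet -f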

The image is available on both the master and the worker. I used the commands below to deploy a pod network:

sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

I'm running out of ideas at this point; any help would be highly appreciated.

P.S. Docker version:

$ docker -v
Docker version 17.03.0-ce, build 3a232c8

The pod:

$ kubectl get pods
NAME                            READY     STATUS              RESTARTS   AGE
user-service-6d9f9c9977-wkqqp   0/1       ContainerCreating   0          10s

and

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY     STATUS             RESTARTS   AGE
kube-system   etcd-vm-hps10                      1/1       Running            0          54m
kube-system   kube-apiserver-vm-hps10            1/1       Running            0          54m
kube-system   kube-controller-manager-vm-hps10   1/1       Running            0          55m
kube-system   kube-dns-86f4d74b45-n9vxs          3/3       Running            0          56m
kube-system   kube-flannel-ds-9nsww              0/1       CrashLoopBackOff   7          14m
kube-system   kube-flannel-ds-lfw8d              0/1       CrashLoopBackOff   15         54m
kube-system   kube-proxy-4v8vl                   1/1       Running            0          56m
kube-system   kube-proxy-5jpgn                   1/1       Running            0          14m
kube-system   kube-scheduler-vm-hps10            1/1       Running            0          54m

When I ran kubectl logs -f kube-flannel-ds-4qzg2 -n kube-system kube-flannel I got:

I0420 03:53:24.646578       1 main.go:353] Found network config - Backend type: vxlan
I0420 03:53:24.746971       1 vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
E0420 03:53:24.747296       1 main.go:280] Error registering network: failed to acquire lease: node "vm-hps10" pod cidr not assigned
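
So flannel is complaining that the node has no pod CIDR. If I understand correctly, that allocation should show up on the node object itself; I assume something like the following would print it (with empty output meaning no CIDR was ever assigned):

$ kubectl get node vm-hps10 -o jsonpath='{.spec.podCIDR}'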
Have you tried running that app in a container on its own? It looks like your containers keep restarting, and the flannel service is restarting as well. Have you traced the logs? – Jogendra Kumar
You mean using a standalone docker run command? – vaibhav
Yes, have you tried that? – Jogendra Kumar
Yes, I did; the image is actually a Spring Boot application which spins up nicely in a standalone Docker container. – vaibhav
Do kubectl describe pod kube-flannel-ds-9nsww -n kube-system and see if it gives you a hint on why it's crashing. – bits

1 Answer

0 votes

OK, so I sorted this out: the problem was with the kubeadm init command. I needed to specify the pod CIDR block as a parameter; once I did that and started the deployment again, it worked.
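
For reference, the fix was roughly the following on the master; 10.244.0.0/16 is the pod CIDR that the flannel manifest expects by default, so adjust it if your flannel config uses a different range:

$ sudo kubeadm reset
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16

With --pod-network-cidr set, the controller manager allocates a per-node CIDR, flannel can acquire its lease, and the pods stop crash-looping. Note that kubeadm reset wipes the cluster, so the flannel manifest has to be re-applied and the worker re-joined afterwards.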