
Following the instructions in the book "Kubernetes Cookbook", I created a Kubernetes cluster with one master and two nodes:

master:  198.11.175.18
  etcd, flannel, kube-apiserver, kube-controller-manager, kube-scheduler  

minion:
  etcd, flannel, kubelet, kube-proxy
  minion1: 120.27.94.15 
  minion2: 114.215.142.7

The OS version is:

[user1@iZu1ndxa4itZ ~]$ lsb_release  -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description:    CentOS Linux release 7.2.1511 (Core)
Release:    7.2.1511
Codename:   Core
[user1@iZu1ndxa4itZ ~]$ uname -a
Linux iZu1ndxa4itZ 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015  x86_64 x86_64 x86_64 GNU/Linux

The Kubernetes version is:

Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"ec7364b6e3b155e78086018aa644057edbe196e5", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"ec7364b6e3b155e78086018aa644057edbe196e5", GitTreeState:"clean"}

I can get the status of the two nodes by running kubectl on the master:

[user1@iZu1ndxa4itZ ~]$ kubectl get nodes
NAME             STATUS    AGE
114.215.142.7    Ready     23m
120.27.94.15     Ready     14h

The components on the master work well:

[user1@iZu1ndxa4itZ ~]$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

But after starting an nginx container, no pods show up:

[user1@iZu1ndxa4itZ ~]$ kubectl run --image=nginx nginx-test
deployment "nginx-test" created

[user1@iZu1ndxa4itZ ~]$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
my-first-nginx     2         0         0            0           20h
my-first-nginx01   1         0         0            0           20h
my-first-nginx02   1         0         0            0           19h
nginx-test         1         0         0            0           5h

[user1@iZu1ndxa4itZ ~]$ kubectl get pods

Any clues for diagnosing the problem? Thanks.
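
For anyone diagnosing something similar, commands along these lines usually surface the reason a deployment creates no pods (a sketch in kubectl 1.2 syntax; nginx-test is the deployment created above):

    # Describe the deployment and list recent cluster events; a
    # FailedCreate event normally names the exact reason
    kubectl describe deployment nginx-test
    kubectl get events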

BTW, I attempted to run two Docker containers manually on different nodes, and the two containers could communicate with each other using ping (see the sketch below).
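
Roughly what that manual test looked like (a sketch; the busybox image and container name are illustrative, and the target address is whatever flannel assigned to the peer container):

    # On each minion, start a long-running test container
    docker run -d --name ping-test busybox sleep 3600
    # From one node, ping the flannel IP of the container on the other node
    docker exec ping-test ping -c 3 <flannel IP of the other container>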

Update 2016-08-19

Found a clue in the service logs of kube-apiserver and kube-controller-manager; the problem may be caused by an incorrect security configuration:

sudo service kube-apiserver status -l

    Aug 19 14:59:53 iZu1ndxa4itZ kube-apiserver[21393]: E0819 14:59:53.118954   21393 genericapiserver.go:716] Unable to listen for secure (open /var/run/kubernetes/apiserver.crt: no such file or directory); will try again.
    Aug 19 15:00:08 iZu1ndxa4itZ kube-apiserver[21393]: E0819 15:00:08.120253   21393 genericapiserver.go:716] Unable to listen for secure (open /var/run/kubernetes/apiserver.crt: no such file or directory); will try again.
    Aug 19 15:00:23 iZu1ndxa4itZ kube-apiserver[21393]: E0819 15:00:23.121345   21393 genericapiserver.go:716] Unable to listen for secure (open /var/run/kubernetes/apiserver.crt: no such file or directory); will try again.
    Aug 19 15:00:38 iZu1ndxa4itZ kube-apiserver[21393]: E0819 15:00:38.122638   21393 genericapiserver.go:716] Unable to listen for secure (open /var/run/kubernetes/apiserver.crt: no such file or directory); will try again.

sudo service kube-controller-manager status -l

    Aug 19 15:01:52 iZu1ndxa4itZ kube-controller-manager[21415]: E0819 15:01:52.138742   21415 replica_set.go:446] unable to create pods: pods "my-first-nginx02-1004561501-" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
    Aug 19 15:01:52 iZu1ndxa4itZ kube-controller-manager[21415]: I0819 15:01:52.138799   21415 event.go:211] Event(api.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"my-first-nginx02-1004561501", UID:"ba35be11-652a-11e6-88d2-00163e0017a3", APIVersion:"extensions", ResourceVersion:"120", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "my-first-nginx02-1004561501-" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
    Aug 19 15:01:52 iZu1ndxa4itZ kube-controller-manager[21415]: E0819 15:01:52.144583   21415 replica_set.go:446] unable to create pods: pods "my-first-nginx-3671155609-" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
    Aug 19 15:01:52 iZu1ndxa4itZ kube-controller-manager[21415]: I0819 15:01:52.144657   21415 event.go:211] Event(api.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"my-first-nginx-3671155609", UID:"d6c8288c-6529-11e6-88d2-00163e0017a3", APIVersion:"extensions", ResourceVersion:"54", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "my-first-nginx-3671155609-" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
    Aug 19 15:04:17 iZu1ndxa4itZ kube-controller-manager[21415]: I0819 15:04:17.149320   21415 event.go:211] Event(api.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"nginx-test-863723326", UID:"624ed0ea-65a2-11e6-88d2-00163e0017a3", APIVersion:"extensions", ResourceVersion:"12247", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "nginx-test-863723326-" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
    Aug 19 15:04:17 iZu1ndxa4itZ kube-controller-manager[21415]: E0819 15:04:17.148513   21415 replica_set.go:446] unable to create pods: pods "nginx-test-863723326-" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
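
The error suggests the default service account never got an API token. An easy way to confirm (a sketch; when token creation works, a default-token-* secret should be listed):

    kubectl get serviceaccounts default -o yaml
    kubectl get secrets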
Can you post the output of kubectl get deployments? You can look into kube-scheduler.log, kube-apiserver.log and kube-controller-manager.log for errors. – Rajiv

@Rajiv Thanks for your reply. The output of kubectl get deployments is posted. – thinkhy

Did you find any errors in kube-scheduler.log, kube-apiserver.log, kube-controller-manager.log or kubelet.log? – Rajiv

No, there are no kube-related log files under /var/log... – thinkhy

What is the OS & version? – Rajiv

1 Answer


Resolved the problem with the following procedure:

    # Generate an RSA private key for signing service account tokens
    openssl genrsa -out /tmp/service_account.key 2048
    sudo cp /tmp/service_account.key /etc/kubernetes/service_account.key

    # Point the API server at the key; --secure-port=0 disables the secure
    # port that was failing for lack of /var/run/kubernetes/apiserver.crt
    sudo vim /etc/kubernetes/apiserver
    KUBE_API_ARGS="--secure-port=0 --service-account-key-file=/etc/kubernetes/service_account.key"

    sudo service kube-apiserver restart

    # Give the controller manager the same private key so it can sign tokens
    sudo vim /etc/kubernetes/controller-manager
    KUBE_CONTROLLER_MANAGER_ARGS="--service_account_private_key_file=/etc/kubernetes/service_account.key"

    sudo service kube-controller-manager restart
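
After both restarts, the token controller should create the default token and the stuck replica sets should get their pods. A quick way to verify (a sketch, not part of the original fix):

    # A default-token-* secret should now exist, and the nginx pods
    # should be created within a few seconds
    kubectl get secrets
    kubectl get pods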