5 votes

I'm trying to get going with Kubernetes DaemonSets and not having any luck at all. I've searched for a solution to no avail. I'm hoping someone here can help out.

First, I've seen this ticket. Restarting the controller manager doesn't appear to help. As you can see below, the other kube processes were all started after the apiserver, and the API server has '--runtime-config=extensions/v1beta1=true' set.

kube     31398     1  0 08:54 ?        00:00:37 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://dock-admin:2379 --address=0.0.0.0 --allow-privileged=false --portal_net=10.254.0.0/16 --admission_control=NamespaceAutoProvision,LimitRanger,ResourceQuota --runtime-config=extensions/v1beta1=true
kube     12976     1  0 09:49 ?        00:00:28 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://127.0.0.1:8080 --cloud-provider=
kube     29489     1  0 11:34 ?        00:00:00 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://127.0.0.1:8080 

However, kubectl api-versions only shows v1:

$ kubectl api-versions
Available Server Api Versions: v1

Kubernetes version is 1.2:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"86327329213fed4af2661c5ae1e92f9956b24f55", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"86327329213fed4af2661c5ae1e92f9956b24f55", GitTreeState:"clean"}

The DaemonSet has been created, but appears to have no pods scheduled (status.desiredNumberScheduled is 0).

$ kubectl get ds -o json
{
    "kind": "List",
    "apiVersion": "v1",
    "metadata": {},
    "items": [
        {
            "kind": "DaemonSet",
            "apiVersion": "extensions/v1beta1",
            "metadata": {
                "name": "ds-test",
                "namespace": "dvlp",
                "selfLink": "/apis/extensions/v1beta1/namespaces/dvlp/daemonsets/ds-test",
                "uid": "2d948b18-fa7b-11e5-8a55-00163e245587",
                "resourceVersion": "2657499",
                "generation": 1,
                "creationTimestamp": "2016-04-04T15:37:45Z",
                "labels": {
                    "app": "ds-test"
                }
            },
            "spec": {
                "selector": {
                    "app": "ds-test"
                },
                "template": {
                    "metadata": {
                        "creationTimestamp": null,
                        "labels": {
                            "app": "ds-test"
                        }
                    },
                    "spec": {
                        "containers": [
                            {
                                "name": "ds-test",
                                "image": "foo.vt.edu:1102/dbaa-app:v0.10-dvlp",
                                "ports": [
                                    {
                                        "containerPort": 8080,
                                        "protocol": "TCP"
                                    }
                                ],
                                "resources": {},
                                "terminationMessagePath": "/dev/termination-log",
                                "imagePullPolicy": "IfNotPresent"
                            }
                        ],
                        "restartPolicy": "Always",
                        "terminationGracePeriodSeconds": 30,
                        "dnsPolicy": "ClusterFirst",
                        "securityContext": {}
                    }
                }
            },
            "status": {
                "currentNumberScheduled": 0,
                "numberMisscheduled": 0,
                "desiredNumberScheduled": 0
            }
        }
    ]
}

Here is the YAML file I used to create the DaemonSet:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: ds-test
spec:
  selector:
    app: ds-test
  template:
    metadata:
      labels:
        app: ds-test
    spec:
      containers:
      - name: ds-test
        image: foo.vt.edu:1102/dbaa-app:v0.10-dvlp
        ports:
          - containerPort: 8080

Using that file to create the DaemonSet appears to work (I get 'daemonset "ds-test" created'), but no pods are created:

$ kubectl get pods -o json
{
    "kind": "List",
    "apiVersion": "v1",
    "metadata": {},
    "items": []
}
Can you run kubectl describe ds ds-test to find out more information about the daemonset you created? You may also want to check the kube-controller-manager log to see if (1) a daemon set controller was started at startup, (2) the ds-test was sync'd by the controller. - Yu-Ju Hong
$ kubectl describe ds ds-test
Name:           ds-test
Image(s):       foo.vt.edu:1102/dbaa-app:v0.10-dvlp
Selector:       app=ds-test
Node-Selector:  <none>
Labels:         app=ds-test
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Misscheduled: 0
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No events.
- Richard Quintin
How many nodes does your cluster have and what size? Do you have enough capacity on your cluster to place the pod? - dward
We have two nodes in the cluster. But deleting all other pods doesn't seem to help. (I can start an RC just fine). So it's not a capacity problem. Thanks. - Richard Quintin

5 Answers

4 votes

(I would have posted this as a comment, if I had enough reputation)

I am confused by your output.

kubectl api-versions should print out extensions/v1beta1 if it is enabled on the server. Since it does not, it looks like extensions/v1beta1 is not enabled.

But kubectl get ds should fail if extensions/v1beta1 is not enabled, so I cannot figure out whether extensions/v1beta1 is enabled on your server or not.

Can you try a GET on masterIP/apis and see if extensions is listed there? You can also hit masterIP/apis/extensions/v1beta1 and see if daemonsets is listed there.
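
For example, something like this from the master host (assuming the apiserver still serves the insecure port 8080, which the --master=http://127.0.0.1:8080 flag above suggests; adjust the host, port, and any auth options to your setup):

$ curl http://localhost:8080/apis
# lists the API groups the server actually serves; "extensions" should appear if the group is enabled
$ curl http://localhost:8080/apis/extensions/v1beta1
# lists the resources in that group; "daemonsets" should appear among them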

Also, I see kubectl version says 1.2, but a 1.2 kubectl api-versions should not print the string Available Server Api Versions (that string was removed in 1.1: https://github.com/kubernetes/kubernetes/pull/15796), so your client output looks inconsistent as well.
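
One quick sanity check (just a suggestion, not something your output confirms) is to make sure a stale kubectl binary is not shadowing the 1.2 one on your PATH:

$ type -a kubectl
# shows every kubectl found on the PATH, in lookup order
$ kubectl version
# re-check the client build that actually runs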

1 vote

I hit this issue in my cluster as well (k8s version 1.9.7):

(screenshot: kubectl get ds showing the DaemonSet with DESIRED 0)

DaemonSet pods are managed by the DaemonSet controller, not by the scheduler, so I restarted the controller manager and the problem was solved:

(screenshot: the DaemonSet after restarting the controller manager)
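
For reference, a minimal sketch of restarting the controller manager; the systemd unit name and the kubeadm pod label below are assumptions, so adjust them to your setup:

$ sudo systemctl restart kube-controller-manager
# if it runs as a systemd service, as in the question's setup
$ kubectl -n kube-system delete pod -l component=kube-controller-manager
# if it runs as a static pod (kubeadm-style), the kubelet recreates it from its manifest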

But I think this is a Kubernetes issue; some related information:

Bug 1469037 - Sometime daemonset DESIRED=0 even this matched node

v1.7.4 - Daemonset DESIRED 0 (for node-exporter) #51785

0 votes

I was facing a similar issue and then tried searching for the DaemonSet in the kube-system namespace, as mentioned here: https://github.com/kubernetes/kubernetes/issues/61342. There I did get the expected output.
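
For example, a quick check:

$ kubectl get ds --namespace=kube-system
# list DaemonSets in the kube-system namespace
$ kubectl get ds --all-namespaces
# or list DaemonSets across every namespace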

0 votes

For any case where the current state of the pods does not match the desired state (whether they were created by a DaemonSet, ReplicaSet, Deployment, etc.), I would first check the kubelet on the relevant node:

$ sudo systemctl status kubelet 

Or:

$ sudo journalctl -u kubelet

In many cases pods weren't created in my cluster because of errors like:

Couldn't parse as pod (Object 'Kind' is missing in 'null')

This might occur after editing a resource's YAML in an editor like vim.
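
As a rough sketch, such parse errors can be pulled out of the recent kubelet logs like this (the time window and grep pattern are only examples):

$ sudo journalctl -u kubelet --since "1 hour ago" --no-pager | grep -i "couldn't parse"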

-3 votes

Try:

$ kubectl taint nodes --all node-role.kubernetes.io/master:NoSchedule-

By default the master node carries the node-role.kubernetes.io/master:NoSchedule taint and cannot accept pods; removing the taint allows pods (including DaemonSet pods) to be scheduled there.
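
To verify afterwards (a quick check; the exact taint name depends on your Kubernetes version):

$ kubectl describe nodes | grep -i taints
# the master should no longer list node-role.kubernetes.io/master:NoSchedule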