0
votes

I am attempting to launch a DaemonSet on an existing cluster of 6 nodes with multiple containers already deployed.

Deployment seems to succeed but no pods are created:

> kubectl describe ds
Name:       dd-agent
apiVersion: extensions/v1beta1
Image(s):   datadog/docker-dd-agent:kubernetes
Selector:   app=dd-agent,name=dd-agent,version=v1
Node-Selector:  <none>
Labels:     release=stable,tech=datadog,tier=backend
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Misscheduled: 0
Pods Status:    0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No events.
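For reference, a minimal `extensions/v1beta1` DaemonSet manifest consistent with the output above (the image and labels come from the describe output; the container name and other fields are assumptions):

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: dd-agent
  labels:
    release: stable
    tech: datadog
    tier: backend
spec:
  template:
    metadata:
      labels:
        app: dd-agent
        name: dd-agent
        version: v1
    spec:
      containers:
      - name: dd-agent    # container name assumed
        image: datadog/docker-dd-agent:kubernetes
```

With no nodeSelector, a working daemon set controller should schedule one pod per node, so Desired should read 6 on this cluster, not 0.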
Setup

Deployment

AWS

We are running the example cluster created with kube-aws. The existing cluster has 30 pods already running across 6 nodes.

  • CoreOS alpha (891.0.0)
  • Kubernetes server v1.1.2
  • Updated /etc/kubernetes/manifest/kube-apiserver.manifest to enable DaemonSets by adding --runtime-config=extensions/v1beta1/daemonsets=true

On the kube-aws-controller I restarted services with:

sudo systemctl daemon-reload
sudo systemctl restart kubelet
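The change described above goes into the apiserver's command line in the static pod manifest. Roughly (surrounding fields elided; the `/hyperkube apiserver` entrypoint is typical of kube-aws/CoreOS clusters of this era but is an assumption here):

```yaml
# /etc/kubernetes/manifest/kube-apiserver.manifest (excerpt)
spec:
  containers:
  - name: kube-apiserver
    command:
    - /hyperkube
    - apiserver
    # ...existing flags...
    - --runtime-config=extensions/v1beta1/daemonsets=true
```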

1
Looks like the daemonset controller isn't working properly. Please take a look at the controller manager log on your master (/var/log/kube-controller-manager.log) to see if there are any error messages for debugging. - janetkuo
In particular, can you verify that you see the message "Starting daemon set controller" in the controller manager log file? - Robert Bailey
{"log":"I0114 22:46:49.512820 1 controllermanager.go:332] Starting extensions/v1beta1 apis\n","stream":"stderr","time":"2016-01-14T22:46:49.512866018Z"}
{"log":"I0114 22:46:49.512855 1 controllermanager.go:334] Starting horizontal pod controller.\n","stream":"stderr","time":"2016-01-14T22:46:49.512945663Z"}
{"log":"I0114 22:46:49.512934 1 controllermanager.go:346] Starting job controller\n","stream":"stderr","time":"2016-01-14T22:46:49.513184427Z"}
These were the only startup entries in the log file, @RobertBailey. There were no related errors, @janetkuo. - c1freitas
The log doesn't mention starting the daemon set controller. Can you try restarting the controller manager after restarting the apiserver with daemonsets enabled? - Robert Bailey
I restarted the api-server and then the controller-manager via docker restart and it worked. Thanks for the help. - c1freitas

1 Answer

3
votes

Restarting the Kubelet won't restart any of the pods that the Kubelet manages. The controller manager will only manage DaemonSets if it notices that the feature is enabled in the apiserver, so you need to make sure the apiserver is started with the flag that enables the alpha extensions, and then restart the controller manager.
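Since both components run as Docker containers on the master in this setup, the restart order sketched above would look something like the following (container names are assumptions and may differ on your cluster; check `docker ps` first):

```shell
# Restart the apiserver first so it serves extensions/v1beta1 daemonsets,
# then the controller manager so it re-detects the enabled API group
# and starts the daemon set controller.
docker restart kube-apiserver
docker restart kube-controller-manager

# Verify in the controller manager log that the daemon set controller started:
grep "daemon set" /var/log/kube-controller-manager.log
```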