I am using the skydns-rc.yaml.base file (/kubernetes-release-1.3/cluster/addons/dns/sky..) to create the Kubernetes DNS service, but the kubedns container always fails to start.
The elements I edited are listed below (a sketch of the substitutions follows the list):
- namespace: kube-system replaced by namespace: default
- __PILLAR__DNS__REPLICAS__ replaced by 1
- --domain=__PILLAR__DNS__DOMAIN__. replaced by --domain=cluster.local.
- __PILLAR__FEDERATIONS__DOMAIN__MAP__ deleted
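In other words, the edited copy is equivalent to applying substitutions like the following to the stock template (the output file name skydns-rc.yaml is just how I refer to my edited copy; the exact way I made the edits does not matter):
<code>
# Substitutions equivalent to the edits listed above (for illustration only)
sed -e 's/namespace: kube-system/namespace: default/' \
    -e 's/__PILLAR__DNS__REPLICAS__/1/' \
    -e 's/__PILLAR__DNS__DOMAIN__/cluster.local/g' \
    -e '/__PILLAR__FEDERATIONS__DOMAIN__MAP__/d' \
    skydns-rc.yaml.base > skydns-rc.yaml
</code>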
The whole skydns-rc.yaml.base template file is shown below:
<code>
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v18
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v18
    kubernetes.io/cluster-service: "true"
spec:
  replicas: __PILLAR__DNS__REPLICAS__
  selector:
    k8s-app: kube-dns
    version: v18
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v18
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubedns
        image: gcr.io/google_containers/kubedns-amd64:1.6
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube-dns"
        - --domain=__PILLAR__DNS__DOMAIN__.
        - --dns-port=10053
        __PILLAR__FEDERATIONS__DOMAIN__MAP__
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
      - name: dnsmasq
        image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
        args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: gcr.io/google_containers/exechealthz-amd64:1.0
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.__PILLAR__DNS__DOMAIN__ 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.__PILLAR__DNS__DOMAIN__ 127.0.0.1:10053 >/dev/null
        - -port=8080
        - -quiet
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: Default  # Don't use cluster DNS.
</code>
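I then create the DNS ReplicationController from the edited copy (skydns-rc.yaml is simply the name I gave the edited template, as in the sketch above):
<code>
# Create the kube-dns ReplicationController from the edited template
kubectl create -f skydns-rc.yaml
</code>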
Other info:
- the cluster service IP range is 10.254.0.0/16
- the domain is cluster.local
- the namespace is default
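For reference, the matching Service from the same addon directory (skydns-svc.yaml.base) would look roughly like this after the same kind of substitutions; the clusterIP below is only an assumed address inside 10.254.0.0/16 and has to match whatever the kubelets are given via --cluster-dns:
<code>
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: default
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.0.10   # assumed value; must lie in 10.254.0.0/16 and match the kubelet --cluster-dns flag
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
</code>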
The output of kubectl describe pod kube-dns-v18 is shown below:
<code>
Name:           kube-dns-v18-u7jgt
Namespace:      default
Node:           centos-cjw-minion1/10.139.4.195
Start Time:     Mon, 18 Jul 2016 19:31:48 +0800
Labels:         k8s-app=kube-dns,kubernetes.io/cluster-service=true,version=v18
Status:         Running
IP:             172.17.0.4
Controllers:    ReplicationController/kube-dns-v18
Containers:
  kubedns:
    Container ID:       docker://5f97e1d7185e327ac3cd5415c79b1b51da1987d8946fb243ee1758cdc4d53d29
    Image:              iaasfree/kubedns-amd64:1.5
    Image ID:           docker://sha256:a1490b272781a9921ba216778e741943e9b866114dae7e7e8980daebbc5ba7ed
    Ports:              10053/UDP, 10053/TCP
    Args:
      --domain=cluster.local.
      --dns-port=10053
    QoS Tier:
      memory:           Burstable
      cpu:              Guaranteed
    Limits:
      cpu:              100m
      memory:           200Mi
    Requests:
      cpu:              100m
      memory:           100Mi
    State:              Running
      Started:          Mon, 18 Jul 2016 19:36:02 +0800
    Last State:         Terminated
      Reason:           Error
      Exit Code:        255
      Started:          Mon, 18 Jul 2016 19:34:52 +0800
      Finished:         Mon, 18 Jul 2016 19:35:59 +0800
    Ready:              False
    Restart Count:      3
    Liveness:           http-get http://:8080/healthz delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:          http-get http://:8081/readiness delay=30s timeout=5s period=10s #success=1 #failure=3
    Environment Variables:
  dnsmasq:
    Container ID:       docker://75ef5bc18dfe196438956c42f64a2e2d6fd408329408704f32534ce7b9252663
    Image:              iaasfree/kube-dnsmasq-amd64:1.3
    Image ID:           docker://sha256:8cb0646c9e984cf510ca70704154bee2f2c51cfb2e776f4357c52c1d17c2b741
    Ports:              53/UDP, 53/TCP
    Args:
      --cache-size=1000
      --no-resolv
      --server=127.0.0.1#10053
    QoS Tier:
      cpu:              BestEffort
      memory:           BestEffort
    State:              Running
      Started:          Mon, 18 Jul 2016 19:31:55 +0800
    Ready:              True
    Restart Count:      0
    Environment Variables:
  healthz:
    Container ID:       docker://e11626508ecd5b2cfae3e1eaa3284d75dae4160c113d7f28ce97cbd0185f032d
    Image:              iaasfree/exechealthz-amd64:1.0
    Image ID:           docker://sha256:f3b98b5b347af3254c82e3a0090cd324daf703970f3bb62ba8005020ddf5a156
    Port:               8080/TCP
    Args:
      -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
      -port=8080
      -quiet
    QoS Tier:
      cpu:              Guaranteed
      memory:           Guaranteed
    Limits:
      memory:           20Mi
      cpu:              10m
    Requests:
      cpu:              10m
      memory:           20Mi
    State:              Running
      Started:          Mon, 18 Jul 2016 19:32:12 +0800
    Ready:              True
    Restart Count:      0
    Environment Variables:
Conditions:
  Type          Status
  Ready         False
No volumes.
Events:
  FirstSeen  LastSeen  Count  From                          SubobjectPath             Type    Reason     Message
  ---------  --------  -----  ----                          -------------             ----    ------     -------
  5m         5m        1      {default-scheduler }                                    Normal  Scheduled  Successfully assigned kube-dns-v18-u7jgt to centos-cjw-minion1
  4m         4m        1      {kubelet centos-cjw-minion1}  spec.containers{kubedns}  Normal  Created    Created container with docker id 5814904f6e09
  4m         4m        1      {kubelet centos-cjw-minion1}  spec.containers{dnsmasq}  Normal  Pulled     Container image "iaasfree/kube-dnsmasq-amd64:1.3" already present on machine
  4m         4m        1      {kubelet centos-cjw-minion1}  spec.containers{kubedns}  Normal  Started    Started container with docker id 5814904f6e09
  4m         4m        1      {kubelet centos-cjw-minion1}  spec.containers{dnsmasq}  Normal  Created    Created container with docker id 75ef5bc18dfe
  4m         4m        1      {kubelet centos-cjw-minion1}  spec.containers{dnsmasq}  Normal  Started    Started container with docker id 75ef5bc18dfe
  4m         4m        1      {kubelet centos-cjw-minion1}  spec.containers{healthz}  Normal  Pulled     Container image "iaasfree/exechealthz-amd64:1.0" already present on machine
  4m         4m        1      {kubelet centos-cjw-minion1}  spec.containers{healthz}  Normal  Created    Created container with docker id e11626508ecd
  4m         4m        1      {kubelet centos-cjw-minion1}  spec.containers{healthz}  Normal  Started    Started container with docker id e11626508ecd
  3m         3m        1      {kubelet centos-cjw-minion1}  spec.containers{kubedns}  Normal  Killing    Killing container with docker id 5814904f6e09: pod "kube-dns-v18-u7jgt_default(370b6791-4cdb-11e6-80f0-fa163ebb45ec)" container "kubedns" is unhealthy, it will be killed and re-created.
</code>
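The last event shows the kubelet killing the kubedns container because its liveness probe on :8080/healthz is failing. That port is served by the exechealthz sidecar in the same pod, so the probe can be reproduced by hand from the node; this is just a sanity check using the pod IP 172.17.0.4 reported above:
<code>
# Reproduce the kubedns liveness probe from the node (pod IP taken from the describe output).
# exechealthz only answers ok if its nslookup against the local kube-dns succeeds.
curl -v http://172.17.0.4:8080/healthz
</code>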
The output of kubectl logs kube-dns-v18-yhk41 -c kubedns (a later pod of the same ReplicationController) is shown below:
<code>
I0719 06:43:41.335795 1 dns.go:172] Ignoring error while waiting for service default/kubernetes: serializer for text/html; charset=utf-8 doesn't exist. Sleeping 1s before retrying.
E0719 06:43:41.335928 1 reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: serializer for text/html; charset=utf-8 doesn't exist
E0719 06:43:41.533705 1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: serializer for text/html; charset=utf-8 doesn't exist
I0719 06:43:41.534017 1 dns.go:439] Received DNS Request:kubernetes.default.svc.cluster.local., exact:false
I0719 06:43:41.534048 1 dns.go:539] records:[], retval:[], path:[local cluster svc default kubernetes]
I0719 06:43:42.336756 1 dns.go:172] Ignoring error while waiting for service default/kubernetes: serializer for text/html; charset=utf-8 doesn't exist. Sleeping 1s before retrying.
E0719 06:43:42.336893 1 reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: serializer for text/html; charset=utf-8 doesn't exist
E0719 06:43:42.534553 1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: serializer for text/html; charset=utf-8 doesn't exist
</code>
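Every failure above comes down to the same message: "serializer for text/html; charset=utf-8 doesn't exist", i.e. kubedns is getting back an HTML document where it expects JSON when it lists services and endpoints from the API server. I have not found the cause yet; a check along these lines (with <master-ip> replaced by the real API server address, which I have left out here, and the port adjusted if the insecure port is not 8080) would show what the node actually receives:
<code>
# Ask the API server for the same resources kubedns tries to list; the responses should be JSON, not HTML.
# <master-ip> is a placeholder for the real API server address.
curl -v http://<master-ip>:8080/api/v1/services
curl -v http://<master-ip>:8080/api/v1/endpoints
</code>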