
I've been trying to get Spark working on Kubernetes on my local machine. However, I'm having trouble understanding how the networking of services works.

I'm running Kubernetes in containers on my laptop:

  • Etcd 2.0.5.1
  • Kubelet 1.1.2
  • Proxy 1.1.2
  • SkyDNS 2015-03-11-001
  • Kube2sky 1.11

Then I'm launching Spark, which is located in the examples of the Kubernetes GitHub repo.

kubectl create -f kubernetes/examples/spark/spark-master-controller.yaml
kubectl create -f kubernetes/examples/spark/spark-master-service.yaml
kubectl create -f kubernetes/examples/spark/spark-webui.yaml
kubectl create -f kubernetes/examples/spark/spark-worker-controller.yaml
kubectl create -f kubernetes/examples/spark/zeppelin-controller.yaml
kubectl create -f kubernetes/examples/spark/zeppelin-service.yaml
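To confirm that the controllers and services from the manifests above actually came up, something like the following can be run (standard kubectl commands; the exact pod names will vary):

```shell
# List the pods created by the replication controllers, with their IPs
kubectl get pods -o wide

# List the services, including the cluster (service) IP assigned to each
kubectl get svc
```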

My local network: 10.7.64.0/24
My Docker network: 172.17.0.1/16

What works:

  • The Spark master launches and I can connect to the web UI.
  • The Spark worker's DNS query for spark-master succeeds (it returns the correct service IP of the master).

What does not work:

  • The Spark worker cannot connect to the service IP. There is no route to that host, neither in the container nor on the local machine (laptop), and I see nothing happening in iptables. It tries to connect to an address somewhere in the 10.0.0.0/8 network, which I have no routing to. Can someone shed some light on this?
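For context: a Kubernetes service IP is virtual, existing only as iptables DNAT rules written by kube-proxy, so a host routing table never shows a route to it. A minimal sketch (the service address 10.0.0.123 is a made-up example, not one from the cluster above) showing which of the networks mentioned here such an address falls into:

```python
import ipaddress

# Hypothetical service IP of the kind returned by the spark-master DNS lookup
svc_ip = ipaddress.ip_address("10.0.0.123")

networks = {
    "local":   ipaddress.ip_network("10.7.64.0/24"),
    "docker":  ipaddress.ip_network("172.17.0.0/16"),  # 172.17.0.1 is the gateway in this /16
    "cluster": ipaddress.ip_network("10.0.0.0/8"),
}

for name, net in networks.items():
    print(name, svc_ip in net)
# → local False
# → docker False
# → cluster True
```

So the worker connecting to a 10.0.0.0/8 address it has no route to is expected; the question is whether kube-proxy has installed the iptables rules that intercept that traffic.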

Details:

How I start the containers:

sudo docker run \
  --net=host \
  -d kubernetes/etcd:2.0.5.1 \
  /usr/local/bin/etcd \
  --addr=$(hostname -i):4001 \
  --bind-addr=0.0.0.0:4001 \
  --data-dir=/var/etcd/data

sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/sys:/sys:ro \
  --volume=/dev:/dev \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
  --volume=/var/run:/var/run:rw \
  --net=host \
  --pid=host \
  --privileged=true \
  -d \
  gcr.io/google_containers/hyperkube:v1.2.0 \
  /hyperkube kubelet \
    --containerized \
    --hostname-override="127.0.0.1" \
    --address="0.0.0.0" \
    --api-servers=http://localhost:8080 \
    --config=/etc/kubernetes/manifests \
    --cluster-dns=10.7.64.184 \
    --cluster-domain=kubernetes.local

sudo docker run -d --net=host --privileged \
  gcr.io/google-containers/hyperkube:v1.2.0 \
  /hyperkube proxy \
    --master=http://127.0.0.1:8080 \
    --v=2 \
    --cluster-dns=10.7.64.184 \
    --cluster-domain=kubernetes.local \
    --cloud-provider=""

sudo docker run -d --net=host --restart=always \
  gcr.io/google_containers/kube2sky:1.11 \
  -v=10 -logtostderr=true -domain=kubernetes.local \
  -etcd-server="http://127.0.0.1:4001"

sudo docker run -d --net=host --restart=always \
  -e ETCD_MACHINES="http://127.0.0.1:4001" \
  -e SKYDNS_DOMAIN="kubernetes.local" \
  -e SKYDNS_ADDR="10.7.64.184:53" \
  -e SKYDNS_NAMESERVERS="8.8.8.8:53,8.8.4.4:53" \
  gcr.io/google_containers/skydns:2015-03-11-001

Thanks!


1 Answer


I found what the issue was: the proxy was not running, because --cluster-dns and --cluster-domain are not parameters of the proxy. After removing them, the iptables rules are created and the Spark workers are able to connect to the service IP of the spark-master.
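For reference, the working invocation would be the proxy command from the question minus those two flags (they belong to the kubelet, not kube-proxy), i.e. something like:

```shell
# Same image and master as before; --cluster-dns / --cluster-domain removed,
# since kube-proxy rejects them and exits instead of writing iptables rules.
sudo docker run -d --net=host --privileged \
  gcr.io/google-containers/hyperkube:v1.2.0 \
  /hyperkube proxy \
    --master=http://127.0.0.1:8080 \
    --v=2 \
    --cloud-provider=""
```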