2
votes

New to Kubernetes, but I want to quickly run some Docker containers on different machines, e.g. containers 1, 2 and 3 on node 1 (physical machine 1) and containers 4, 5 and 6 on node 2 (physical machine 2). Can someone help me with the config files and commands to get this up and running, so that all containers can communicate with each other?

I found the example at https://gettech1.wordpress.com/2016/10/03/kubernetes-forcefully-run-pod-on-specific-node/ close to what I want, but it only has one pod. How do I do it with two pods (assuming I can add more containers to each pod) and run the two pods together in one deployment, so that the containers are on the same network and can therefore communicate with each other?

I also want to run a Docker container with a bind mount using "shared" bind propagation. How do I specify that?

Personally, I found the Kubernetes documentation a little hard to navigate, with layers of concepts referencing each other. If anyone can point me to a clean tutorial, that would help too. I'd like to learn how to run containers on multiple machines, then how to autoscale by adding more containers to a pod, more pods to a node, and more nodes to a cluster, and then the different types of networking and volume management.

1
This is a good question in that it indicates some conceptual confusion. It sounds like, in the question, the term pod should be substituted for the term container. Pods are the unit of scheduling, not individual containers. There are ways containers can communicate with other containers (e.g. via Service names) even if they are not packaged together in the same pod. It looks like k8s only supports rshared bind propagation, not shared. Some background: medium.com/kokster/kubernetes-mount-propagation-5306c36a4a2d – Jonah Benton
Maybe split the bind mount section out into a separate question? – Matt
That's my confusion; I'm a little clueless in front of the Kubernetes documentation. My understanding is that a Pod can have multiple containers but does not cross nodes (physical machines). To get started, I just want a configuration with Pod1 (containers 1, 2 and 3) running on Node1 and Pod2 (containers 4, 5 and 6) on Node2, with all of these containers able to communicate with each other, but I just could not find a simple example showing how to do this. I can run these containers very easily with docker-compose on one machine, and the containers communicate with each other fine. – hanaZ
@hanaZ What kind of communication do containers 1, 2, 3 and 4, 5, 6 need to have with one another? Calling each other over HTTP? Or something else? – Jonah Benton
For example, container 1 is the management node that all other nodes need to register with. Container 4 serves metadata, containers 2 and 5 serve storage, and containers 3 and 6 are clients accessing the storage. Storage and clients get the metadata container's IP address and port from the management container and then make metadata queries. Clients also get the storage addresses from the management container. – hanaZ

1 Answer

2
votes

The simplest way to assign Pods to Nodes is to use label selectors.

Labels and Selectors are a concept you will need to understand throughout Kubernetes.

First add labels to the nodes:

kubectl label nodes node-a podwants=somefeatureon-nodea
kubectl label nodes node-b podwants=somefeatureon-nodeb
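
Just as a sanity check (not required), you can confirm the labels were applied with:

kubectl get nodes --show-labels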

A nodeSelector can then be set in the Pod definition's spec:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: my-app
spec:
  nodeSelector:
    podwants: somefeatureon-nodea
  containers:
    - name: nginx
      image: nginx:1.8
      ports:
      - containerPort: 80
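
The second Pod works the same way, just selecting the other node's label. A sketch for node-b (the name, image and port here are placeholders, substitute your own):

apiVersion: v1
kind: Pod
metadata:
  name: metadata-server        # placeholder name
  labels:
    app: my-app
spec:
  nodeSelector:
    podwants: somefeatureon-nodeb
  containers:
    - name: metadata
      image: my-metadata-image  # placeholder image
      ports:
      - containerPort: 8080     # placeholder port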

Since the containers in a Pod are always co-located on the same Node and can all reach each other, Pod-to-Pod communication is done by exposing the Pod as a Service. Note that the Service also uses a label selector to find its Pods:

kind: Service
apiVersion: v1
metadata:
  name: web-svc
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Then you can discover the available Services in other Pods via environment variables or via DNS if you have added CoreDNS to your cluster.

 WEB_SVC_SERVICE_HOST=x.x.x.x
 WEB_SVC_SERVICE_PORT=80
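
With DNS-based discovery, the Service name alone is enough from any other Pod (assuming the default namespace and that curl is available in the container):

 curl http://web-svc
 curl http://web-svc.default.svc.cluster.local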

You won't often define and schedule Pods directly yourself. You will probably use a Deployment, which describes your Pods and helps you scale them.
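
As a minimal sketch (the replica count is arbitrary, and apps/v1 assumes a reasonably recent cluster), a Deployment wrapping the nginx Pod above might look like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      nodeSelector:
        podwants: somefeatureon-nodea
      containers:
        - name: nginx
          image: nginx:1.8
          ports:
          - containerPort: 80

You can then scale it with something like kubectl scale deployment nginx-deployment --replicas=5.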

Once you've got the simple case down, the documentation follows on to describe node affinity, which lets you define more complex rule sets, even down to making scheduling decisions based on which Pods are already scheduled on a Node.
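
For example, a required node-affinity rule using the same label as above would go in the Pod spec roughly like this (a sketch of the affinity section only):

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: podwants
            operator: In
            values:
            - somefeatureon-nodea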