2
votes

Using a ReplicationController, when I schedule 2 (two) replicas of a pod I expect 1 (one) replica on each node (VM). Instead, both replicas are created on the same node. This makes that node a single point of failure, which I need to avoid.

For 2 pods: 1 pod on Node A, 1 pod on Node B

For 3 pods: 2 pods on Node A, 1 pod on Node B, which Kubernetes can schedule as per resource availability

Any suggestions on what is not configured correctly?

apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb-rc
spec:
  replicas: 2
  selector:
    role: "myweb"
  template:
    metadata:
      labels:
        role: "myweb"
    spec:
      containers:
      - name: tomcat
        image: myregistry.my.com/dev/cert/my-web/myweb/deployment_build_app-671-354-1.0.0-snapshot
        ports:
        - name: tomcat
          containerPort: 8080
        readinessProbe:
          httpGet:
            path: /app
            port: 8080
          initialDelaySeconds: 30
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 1000m
            memory: 100Mi
          limits:
            cpu: 2000m
            memory: 7629Mi
      imagePullSecrets:
      - name: myregistrykey
      nodeSelector:
        kubernetes.io/hostname: myapp01
1
Can you share the replication controller definition? – kichik
@kichik – I have included the RC definition. – ad-inf
Maybe that nodeSelector part? – kichik

1 Answer

2
votes

Is it possible that you have not labeled all your nodes with the same key-value pair?

Kubernetes will only schedule the pod onto nodes whose labels match the nodeSelector in your replication controller, in this case kubernetes.io/hostname: myapp01. Since kubernetes.io/hostname is unique per node, that selector pins every replica to the single node myapp01. You need to ensure that every node on which you want Kubernetes to schedule the pod carries a matching label (and a comparable configuration).
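One way to do this, sketched below assuming your nodes are named myapp01 and myapp02 (the second node name and the app=myweb label are assumptions for illustration): apply a shared label to every eligible node and point nodeSelector at that label instead of the per-node kubernetes.io/hostname:

# Label every node that should be eligible to run the pod
# (node names here are assumptions):
#   kubectl label nodes myapp01 app=myweb
#   kubectl label nodes myapp02 app=myweb

# Then, in the replication controller's pod template spec,
# select on the shared label instead of a single hostname:
nodeSelector:
  app: myweb

With replicas: 2 the scheduler is then free to place one replica on each labeled node, subject to resource availability.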