Using a ReplicationController, when I schedule 2 (two) replicas of a pod I expect 1 (one) replica on each node (VM). Instead, both replicas are created on the same node. This makes that node a single point of failure, which I need to avoid. What I expect is:
For 2 pods: 1 pod on Node A, 1 pod on Node B
For 3 pods: 2 pods on Node A, 1 pod on Node B (Kubernetes can place the extra pod wherever resources allow)
Any suggestions on what is not correctly configured? Here is my ReplicationController definition:
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb-rc
spec:
  replicas: 2
  selector:
    role: "myweb"
  template:
    metadata:
      labels:
        role: "myweb"
    spec:
      containers:
      - name: tomcat
        image: myregistry.my.com/dev/cert/my-web/myweb/deployment_build_app-671-354-1.0.0-snapshot
        ports:
        - name: tomcat
          containerPort: 8080
        readinessProbe:
          httpGet:
            path: /app
            port: 8080
          initialDelaySeconds: 30
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 1000m
            memory: 100Mi
          limits:
            cpu: 2000m
            memory: 7629Mi
      imagePullSecrets:
      - name: myregistrykey
      nodeSelector:
        kubernetes.io/hostname: myapp01
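For reference, the kind of "spread replicas across nodes" behaviour I am after is what I understand pod anti-affinity to express. The snippet below is only a sketch of how I assume it would look against the existing role: "myweb" label; it is not part of my current configuration, and I am not sure it is the right fix here. It would go under the pod template's spec:

# Sketch only (assumption): ask the scheduler to prefer not placing two
# "myweb" pods on the same node, keyed on the node hostname label.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            role: "myweb"
        topologyKey: kubernetes.io/hostname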