
My Problem: When I run

kubectl -n test scale --replicas=5 -f web-api-deployment.yaml
  1. It scales the deployment to one pod per node, even though the nodes have plenty of spare capacity. Why doesn't it schedule more than one pod per node?
  2. At present only one pod per node gets port 443 access. What if I wanted to run three nginx pods on the same node, all hosting the same web app on 443, and wanted the load balancer to balance between the three pods on that node?

Kubernetes cluster: 3 masters, 5 worker nodes

AWS: an Elastic Load Balancer forwards port 443 to each Kubernetes worker node

POD DEPLOYMENT:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: test
  name: WEB-API
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: WEB-API
    spec:
      containers:
        - name: WEB-API
          image: WEB-API:latest
          env:
            - name: NGINX_WORKER_PROCESSES
              value: "1"
            - name: KEEPALIVETIMEOUT
              value: "0"
            - name: NGINX_WORKER_CONNECTIONS
              value: "2048"
          resources:
            requests:
              cpu: 500m
              memory: 500Mi
          ports:
            - containerPort: 443
          volumeMounts:
            - name: config-volume
              mountPath: /opt/config/
            - name: aws-volume
              mountPath: /root/.aws
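
Note that the volumeMounts above reference two volumes that are never defined under spec.template.spec.volumes, so the manifest as posted will be rejected. A minimal sketch of a matching volumes block, assuming the config comes from a ConfigMap named web-api-config and the AWS credentials from a Secret named aws-credentials (both names are hypothetical):

      # Sketch only: the volume sources below are assumptions, not taken
      # from the original question. Nest this under spec.template.spec,
      # as a sibling of "containers".
      volumes:
        - name: config-volume
          configMap:
            name: web-api-config         # hypothetical ConfigMap name
        - name: aws-volume
          secret:
            secretName: aws-credentials  # hypothetical Secret name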

SERVICE:

apiVersion: v1
kind: Service
metadata:
  namespace: prd
  name: WEB-API
  annotations:
    external-dns.alpha.kubernetes.io/hostname: someaddress
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:xxxxxxxxxxxxxxxx:certificate/xxxxxxxxxxxxxxxxxxxxxxx
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  labels:
    app: WEB-API
spec:
  externalTrafficPolicy: Cluster
  ports:
    - name: https
      port: 443
      targetPort: 80
      protocol: TCP
  selector:
    app: WEB-API
  sessionAffinity: None
  type: LoadBalancer
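
Once the Service has endpoints, you can confirm which pod IPs it is balancing across (namespace and name taken from the manifest above):

  kubectl -n prd get endpoints WEB-API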
Could you please provide the output of 'kubectl get pods --all-namespaces -o wide', so that I have a clear picture of your current cluster load. – Nepomucen

1 Answer

  1. There is no reason why it would not schedule more than one pod per node. The scheduler simply tries to spread the workload across nodes optimally, which in your case, with 5 replicas and 5 worker nodes, works out to exactly one pod per node. Do you have pods in the "Pending" state? If so, check their describe output for information on why they were not scheduled. You can also cordon or drain nodes to see how the 5 pods behave when fewer nodes are available for scheduling (see the first sketch below).

  2. The 443 binding lives inside each pod's own network namespace, so you can have as many pods listening on their own port 443 at the same time as you like. There will be no port conflict, because each pod has its own separate localhost and pod IP (the second sketch below shows how to pack several replicas onto the same nodes).
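
A minimal sketch of the diagnostics from point 1, with <pod-name> and <node-name> as placeholders for real values from your cluster:

  # List pods with their node assignments; look for any stuck in Pending
  kubectl -n test get pods -o wide

  # The Events section at the bottom explains why a pod was not scheduled
  kubectl -n test describe pod <pod-name>

  # Take a node out of scheduling, evict its pods, and watch where they land
  kubectl cordon <node-name>
  kubectl drain <node-name> --ignore-daemonsets
  kubectl -n test get pods -o wide

  # Return the node to service afterwards
  kubectl uncordon <node-name>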
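
And for point 2, a sketch under the assumption that the deployment keeps the name from the manifest above: with 5 worker nodes, scaling to 15 replicas should land roughly three pods per node, each listening on its own port 443, and the Service will balance across all of them.

  # Scale well past the node count so several replicas share each node
  kubectl -n test scale deployment WEB-API --replicas=15

  # Verify that multiple pods now run on the same node
  kubectl -n test get pods -o wide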