
I have one application in two environments; it's been running for well over a year. Now I had to re-deploy it in one environment, and I'm left with half-working external traffic.

Example of the working one:

$ kubectl get ingress
NAME     HOSTS                ADDRESS                         PORTS     AGE
my-app   prod-app.my.domain   <public IP, e.g. 41.30.20.20>   80, 443   127d

And the not working one:

MacBook-Pro% kubectl get ingress
NAME     HOSTS               ADDRESS                                           PORTS     AGE
my-app   dev-app.my.domain   10.223.0.76,10.223.0.80,10.223.0.81,10.223.0.99   80, 443   5m5s

(for some reason private addresses, not the public one I assigned?)
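
Those comma-separated 10.223.0.x entries look like node-internal IPs rather than a load balancer address; an easy way to check that hunch is to compare them against the nodes:

kubectl get nodes -o wide
# if the INTERNAL-IP column matches the ingress ADDRESS entries,
# the ingress status is being filled with node addresses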

The deployment works like so: in Helm I have the Deployments, Services etc., plus the Kubernetes Ingress resource:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.deployment.name }}
  namespace: {{ .Values.deployment.env }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    <some other annotations>
spec:
  tls:
  - secretName: {{ .Values.ingress.tlsSecretName.Games }}
  rules:
  - host: [prod,dev]-app.my.domain
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: {{ .Values.service.port }}
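
(Side note for the re-deploy: extensions/v1beta1 Ingress was removed in Kubernetes 1.22, so on a newer cluster the same resource would need the networking.k8s.io/v1 shape; a sketch under that assumption, reusing the same values keys:)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Values.deployment.name }}
  namespace: {{ .Values.deployment.env }}
spec:
  tls:
  - secretName: {{ .Values.ingress.tlsSecretName.Games }}
  rules:
  - host: dev-app.my.domain
    http:
      paths:
      - path: /
        pathType: Prefix          # pathType is mandatory in v1
        backend:
          service:
            name: my-app
            port:
              number: {{ .Values.service.port }}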

And before it I deployed the stable/nginx-ingress Helm chart (yup, I know there is ingress-nginx/ingress-nginx; I will migrate to it soon, but first I want to bring the env back).

And the simple nginx config (the chart values):

controller:
  name: main
  tag: "v0.41.2"

  config:
    log-format-upstream: ....
  replicaCount: 4

  service:
    externalTrafficPolicy: Local

  updateStrategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 25% #max number of Pods can be unavailable during the update
    type: RollingUpdate

  # We want to disperse pods into the whole cluster, on each data node
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              # app label is set in the main deployment manifest
              # https://github.com/helm/charts/blob/master/stable/nginx-ingress/templates/controller-deployment.yaml#L6
              values:
              - nginx-ingress
            - key: release
              operator: In
              values:
              - my-app-ingress
          topologyKey: kubernetes.io/hostname
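
One chart option I did not set here, and which, if I read the chart right (not verified against my deployment), decides what ends up in the ADDRESS column of kubectl get ingress, is the publish-service one; with it disabled the controller reports the node addresses of its own pods instead of the LoadBalancer IP:

controller:
  # maps to the controller's --publish-service flag: publish the
  # external IP of the controller's LoadBalancer Service into the
  # status of the Ingress resources, instead of node addresses
  publishService:
    enabled: true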

Any idea why my Kubernetes ingress has private addresses, not the assigned public one?

And my services on prod are:

NAME                            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
my-app                          NodePort       10.190.173.152   <none>        8093:32519/TCP               127d
my-app-ingress-stg-controller   LoadBalancer   10.190.180.54    <PUB_IP>      80:30111/TCP,443:30752/TCP   26d

And on dev:

NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
my-app                    NodePort       10.190.79.119   <none>        8093:30858/TCP               10m
my-app-ingress-dev-main   LoadBalancer   10.190.93.104   <PUB_IP>      80:32027/TCP,443:30534/TCP   10m

I kinda see the problem (I already tried migrating to the new nginx chart a month ago, and on dev the old one is still running, but there were issues with having multiple envs with ingresses on the same dev cluster). I guess I'll try to migrate to the new one and see if that somehow fixes the issue (a sketch of that below). Other than that, any idea why the private addresses?
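
For the record, the migration route would presumably be something like this (a sketch; the release name, namespace and values file are hypothetical):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install my-app-ingress-dev ingress-nginx/ingress-nginx \
  --namespace dev \
  --values my-values.yaml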


Not sure how it works, but I deployed the ingress (the nginx-ingress Helm chart) after deploying the application Helm chart; at first all pods were 1/1 Ready and the site didn't respond, and after ~10 min it did, ¯\_(ツ)_/¯ no idea why it took so long. As future reference, what I did was (see the sketch after this list):

  1. Reserve a public IP in GCP (my cloud provider).
  2. Create an A record where my domain is registered (GoDaddy etc.) to point to that public address from step 1.
  3. Deploy the app Helm chart with the ingress in it, with my domain and SSL cert in it, and the Kubernetes Service (load balancer) having that public IP.
  4. Deploy nginx-ingress pointing to that public address the domain resolves to.
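
A minimal sketch of steps 1 and 4, assuming GCP and the stable/nginx-ingress chart (the address name and region below are hypothetical):

# step 1: reserve a regional static IP and read it back
gcloud compute addresses create my-app-dev-ip --region europe-west1
gcloud compute addresses describe my-app-dev-ip --region europe-west1 --format='value(address)'

and then, in the chart values, pin the controller's Service to the returned address:

controller:
  service:
    loadBalancerIP: 41.30.20.20   # the address reserved in step 1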

If there is any mistake in my logic, please say so and I'll update it.

Comments:

Not only does this question belong on ServerFault.com (as it does not deal with programming), but you're going to have to tighten up the question, because right now it's all over the place. It's also missing any troubleshooting information about what you have already tried in order to diagnose the error yourself. Also, that shell-glob business is unlikely to work in the host: field, if for no other reason than that it's not legal YAML to start a scalar with [. – mdaniel

Yeah, it was meant to show that it can be either prod-app.my.domain or dev-app.my.domain depending on the values.yaml provided. And I guess it doesn't deal with programming, more with devops stuff; I'll try to provide more info in posts in the future and give better context. – potatopotato

2 Answers

0 votes
  1. @potatopotato I have just moved your own answer from the initial question to a separate community-wiki answer. That way it will be more searchable and indexable in future searches.

  2. Explanation regarding the below:

Not sure how it works, but I deployed the ingress (nginx-ingress Helm chart) after deploying the application Helm chart; at first all pods were 1/1 Ready and the site didn't respond, and after ~10 min it did, ¯\_(ツ)_/¯ no idea why it took so long

As per the official GKE documentation:

Note: It might take a few minutes for GKE to allocate an external IP address and prepare the load balancer. You might get errors like HTTP 404 and HTTP 500 until the load balancer is ready to serve the traffic.
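
So the wait is expected; a simple way to watch the load balancer come up (service name and namespace here are hypothetical) is:

kubectl get service my-app-ingress-dev-main --namespace dev --watch
# EXTERNAL-IP stays <pending> until GCP finishes provisioning the
# load balancer; errors like HTTP 404/500 can occur until then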

  3. Your answer itself:

Not sure how it works, but I deployed the ingress (nginx-ingress Helm chart) after deploying the application Helm chart; at first all pods were 1/1 Ready and the site didn't respond, and after ~10 min it did, ¯\_(ツ)_/¯ no idea why it took so long. As future reference, what I did was:

  1. Reserve a public IP in GCP (my cloud provider).
  2. Create an A record where my domain is registered (GoDaddy etc.) to point to that public address from step 1.
  3. Deploy the app Helm chart with the ingress in it, with my domain and SSL cert in it, and the Kubernetes Service (load balancer) having that public IP.
  4. Deploy nginx-ingress pointing to that public address the domain resolves to.
0 votes

This is not the right place to ask questions.