I have a Meteor app deployed with Kubernetes on Google Cloud, with Nginx performing SSL termination. Everything is working OK.
However, it appears that if two different clients connect to two different SSL containers, updates don't show up in the respective apps for up to 10 seconds, which makes it seem that WebSockets isn't working and fallback polling is taking effect instead. I have confirmed that all clients are connected via WebSockets, but since updates do not propagate immediately, perhaps Nginx isn't configured to correctly talk to the Meteor app.
Here's my SSL/Nginx service:
apiVersion: v1
kind: Service
metadata:
  name: frontend-ssl
  labels:
    name: frontend-ssl
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
  selector:
    name: frontend-ssl
  type: LoadBalancer
  loadBalancerIP: 123.456.123.456
  sessionAffinity: ClientIP
And here is the Meteor service:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  ports:
    - port: 3000
      targetPort: 3000
  selector:
    name: flow-frontend
  type: LoadBalancer
  loadBalancerIP: 123.456.123.456
  sessionAffinity: ClientIP
For SSL termination, I'm using the Kubernetes-suggested SSL setup, forked with WebSocket additions: https://github.com/markoshust/nginx-ssl-proxy
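For reference, here is a minimal sketch of the kind of WebSocket-aware proxy directives such a setup needs; the server name and upstream address are placeholders, not my actual values:

```nginx
# Sketch: WebSocket-aware SSL termination block (placeholder names/ports).
# The map lets Nginx close the upstream connection cleanly when no Upgrade
# header is present, and forward the upgrade when one is.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 443 ssl;
    server_name example.com;                 # placeholder

    location / {
        proxy_pass http://frontend:3000;     # Meteor service (placeholder)
        proxy_http_version 1.1;              # WebSockets require HTTP/1.1
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Without `proxy_http_version 1.1` and the `Upgrade`/`Connection` headers, Nginx silently downgrades the connection and Meteor's DDP falls back to polling.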
Comments:

proxy_pass http://svcName.svcNamespace.svc.cluster.local:svcPort in your config (assuming you have cluster DNS working and can resolve your service name with e.g. nslookup). Let me know how this goes. – Prashanth B

proxy_pass http://target_service:port to route requests from nginx to meteor. – Mark Shust at M.academy
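Putting the comments together, the proxy_pass target would be the in-cluster DNS name of the Meteor service. A sketch, assuming the frontend service from the manifest above lives in the default namespace (the namespace is an assumption on my part):

```nginx
# Sketch: route to the Meteor service via cluster DNS.
# Service "frontend", namespace "default" (assumed), port 3000 per the manifest.
location / {
    proxy_pass http://frontend.default.svc.cluster.local:3000;
}
```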