
I have a container cluster running behind a load balancer on GKE. It's working well, but I occasionally get 502 errors when I try to access pages. The logs show the following:

{
  metadata: {
    severity: "WARNING",
    projectId: "###",
    serviceName: "network.googleapis.com",
    zone: "global",
    labels: {…},
    timestamp: "2016-04-28T16:35:46.864379896Z",
    projectNumber: "###"
  },
  insertId: "2016-04-28|09:35:47.696726-07|10.94.35.131|1729057675",
  log: "requests",
  httpRequest: {
    requestMethod: "GET",
    requestUrl: "https://###/user/view/111",
    requestSize: "2089",
    status: 502,
    responseSize: "362",
    userAgent: "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.86 Safari/537.36",
    remoteIp: "###",
    referer: "###"
  }
}

When I review the access logs from my containers, I do not see any matching requests at the times the 502 errors are generated. It appears the requests are not making it past the load balancer.

Has anyone experienced this issue with Load Balancers? Any recommended solution? Thanks.

I should add, all my instances are showing healthy and have <20% CPU usage. — user3113357
Can you add more info about the configuration of your containers and your load balancer? — CJ Cullen
Is there specific information that would help? I'm running two containers serving a Python app via uWSGI. The containers are exposed via NodePort, and each runs four uWSGI processes. The load balancer sends all traffic except requests to /static/ to these containers. — user3113357
It appears that both of my containers were restarted when these errors happened. I can tell from the uptime shown when I run docker ps on the container host. I haven't found anything in the container logs that gives a reason for the restarts, though. — user3113357
That would explain the 502s. You might check the kubelet logs. It's possible the containers were failing health checks and the kubelet restarted them. — CJ Cullen
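To act on that last suggestion, one approach is to grep the kubelet log for probe failures and container kills around the timestamps of the 502s. A minimal sketch in Python; the sample log lines and the keyword list are assumptions for illustration, not taken from the poster's logs:

```python
# Sketch: scan kubelet-style log lines for health-check failures and
# container restarts. Keywords and sample lines are illustrative only.
def find_healthcheck_failures(lines):
    """Return lines that mention a failed probe or a container kill."""
    keywords = ("unhealthy", "killing container", "liveness probe failed")
    return [ln for ln in lines if any(k in ln.lower() for k in keywords)]

log = [
    "I0428 09:35:40 kubelet: Liveness probe failed: HTTP 500",
    "I0428 09:35:41 kubelet: Killing container app-1",
    "I0428 09:36:02 kubelet: Started container app-1",
]
hits = find_healthcheck_failures(log)
assert len(hits) == 2  # the probe failure and the kill, not the restart
```

If lines like these show up just before the 502s, the restarts (and the resulting window with no backend to serve traffic) would explain the errors.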

2 Answers


A 502 error suggests that the load balancer is sending traffic but not receiving responses. Is it possible that your endpoints are reporting healthy, but some containers are not actually ready to serve?


You might want to try moving to a different region or zone. I was having a similar issue: requests coming from the Americas were failing, while requests from EU/Asia were all successful. That made me think it was a problem with the backend service in us-central, so I switched from us-central1-c to us-east, and everything was fine after that.