We're trying to put Azure Traffic Manager in front of our Azure Kubernetes Service (AKS) clusters so we can run a cluster in two regions (UK West and UK South) and balance traffic across both.
Traffic Manager itself seems to be working OK, but in the Azure portal the endpoint is showing as Degraded, and in the ingress controller logs on the Kubernetes cluster I can see a request that looks like this:
[18/Sep/2019:10:40:58 +0000] "GET / HTTP/1.1" 404 153 "-" "Azure Traffic Manager Endpoint Monitor" 407 0.000 [-]
So the Traffic Manager endpoint monitor is firing off a request to /, it's hitting the ingress controller, but since it can't resolve that path to a backend it returns a 404.
I've had a play around with the custom host header setting, pointing the probe at a health check endpoint in one of the pods. It did sort of work for a while, but then it seemed to go back to doing a GET on /, so the endpoint went Degraded again (yeah, I know that sounds odd).
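For context, on the cluster side that attempt looked roughly like the sketch below (host, service and path names are placeholders, not our real config, and I'm assuming the NGINX ingress controller): the Traffic Manager monitor sends the custom host header, and an ingress rule for that host routes the probe to a health endpoint on one specific service.

```yaml
# Rough sketch only - names are placeholders, not our real config.
apiVersion: networking.k8s.io/v1      # older clusters may need extensions/v1beta1
kind: Ingress
metadata:
  name: tm-health-probe
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: health.myapp.example.com   # set as the custom host header on the Traffic Manager monitor
      http:
        paths:
          - path: /healthz             # also set as the monitor path on the Traffic Manager profile
            pathType: Prefix
            backend:
              service:
                name: my-app           # the single service/pod I'd rather not depend on
                port:
                  number: 80
```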
Even if that worked, I don't really want to point the probe at a specific pod's endpoint in case that pod is actually down for some reason. Is there something we can do in the ingress controller config to make it respond to the monitor with a 200 so Traffic Manager knows the cluster is up?
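To be clear about the sort of thing I'm picturing (not at all sure this is the right way, and it assumes the NGINX ingress controller with snippet annotations enabled): something like the server-snippet annotation so the controller itself answers the probe on / with a 200 and no application pod is involved. The host and app path below are placeholders.

```yaml
# Hypothetical sketch only, assuming the NGINX ingress controller.
apiVersion: networking.k8s.io/v1      # older clusters may need extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    # Have NGINX itself return 200 for the exact path the Traffic Manager
    # endpoint monitor hits, instead of forwarding it to a pod.
    nginx.ingress.kubernetes.io/server-snippet: |
      location = / {
        return 200 'healthy';
      }
spec:
  rules:
    - host: myapp.ukwest.example.com   # placeholder for the host Traffic Manager probes
      http:
        paths:
          - path: /app                 # placeholder for the real application route
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```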
Cheers