I'm using an nginx-ingress-controller on a bare-metal server. To reach the hosted sites from every node, I created it as a DaemonSet rather than a Deployment (see the Bare-metal considerations in the ingress-nginx docs).
The solution works well, and updates to the Ingress specifications are picked up without issues.
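For context, the relevant part of such a DaemonSet spec looks roughly like this (a minimal sketch, not my exact manifest; the image tag and labels are placeholders):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      hostNetwork: true        # serve directly on each node's network interface
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
          ports:
            - containerPort: 80
            - containerPort: 443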
To make a TS server available, I changed the args for the Pods in nginx-ingress-controller.yml, as mentioned by stacksonstacks:
/nginx-ingress-controller
--configmap=$(POD_NAMESPACE)/nginx-configuration
--publish-service=$(POD_NAMESPACE)/ingress-nginx
--annotations-prefix=nginx.ingress.kubernetes.io
--tcp-services-configmap=default/tcp-ingress-configmap
--udp-services-configmap=default/udp-ingress-configmap
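Those two configmaps map an exposed port to a namespace/service:port target. A minimal sketch of the TCP one (my-service and the port number are placeholders for whatever you want to expose):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-ingress-configmap
  namespace: default
data:
  "10011": "default/my-service:10011"

The UDP configmap uses the same data format and is picked up via the --udp-services-configmap flag.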
Unfortunately, applying the changed specification did not make the DaemonSet recreate its Pods automatically, so when inspecting the Pods, I still saw the old args:
/nginx-ingress-controller
--configmap=$(POD_NAMESPACE)/nginx-configuration
--publish-service=$(POD_NAMESPACE)/ingress-nginx
--annotations-prefix=nginx.ingress.kubernetes.io
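You can check which args a running Pod actually has like this (the Pod name in the second command is a placeholder; take one from the first command's output):

kubectl --namespace ingress-nginx get pods
kubectl --namespace ingress-nginx describe pod nginx-ingress-controller-abc12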
Deleting the Pods in the ingress-nginx namespace with kubectl --namespace ingress-nginx delete pod --all
made the controller create new Pods, and the ports finally became available on the host network.
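In hindsight, this behaviour suggests the DaemonSet's updateStrategy was OnDelete, which only replaces Pods once they are deleted manually. Setting it to RollingUpdate should make future spec changes roll out automatically; a sketch of the relevant snippet, assuming apps/v1:

spec:
  updateStrategy:
    type: RollingUpdate    # replace Pods automatically when the template changes
    rollingUpdate:
      maxUnavailable: 1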
I know the circumstances might be a bit different, but hopefully this saves someone a few minutes.