The kube-proxy component runs on each node and implements the networking rules behind Kubernetes Services. It runs as a Kubernetes DaemonSet, and its configuration is stored in a Kubernetes ConfigMap. You can edit the kube-proxy DaemonSet or ConfigMap in the kube-system namespace with the following commands:
$ kubectl -n kube-system edit daemonset kube-proxy
or
$ kubectl -n kube-system edit configmap kube-proxy
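For reference, in a kubeadm-provisioned cluster that ConfigMap typically embeds a KubeProxyConfiguration object whose mode field selects the operation mode described below. An abridged, illustrative excerpt (field values and layout will differ per cluster):
$ kubectl -n kube-system get configmap kube-proxy -o yaml
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: kube-proxy
    namespace: kube-system
  data:
    config.conf: |
      apiVersion: kubeproxy.config.k8s.io/v1alpha1
      kind: KubeProxyConfiguration
      mode: ""        # empty string falls back to the platform default (iptables on Linux)
Note that edits to the ConfigMap only take effect once the kube-proxy pods are recreated.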
kube-proxy currently supports three different operation modes:
- User space: This mode gets its name because the service routing takes place in kube-proxy in the user process space instead of in the kernel network stack. It is not commonly used, as it is slow and outdated.
- IPVS (IP Virtual Server): Built on the Netfilter framework, IPVS implements Layer-4 load balancing in the Linux kernel, supporting multiple load-balancing algorithms, including least connections and shortest expected delay. This kube-proxy mode became generally available in Kubernetes 1.11, but it requires the Linux kernel to have the IPVS modules loaded (see the sketch after this list). It is also not as widely supported by various Kubernetes networking projects as the iptables mode.
- iptables: This mode uses Linux kernel-level Netfilter rules to configure all routing for Kubernetes Services. This mode is the default for kube-proxy on most platforms. When load balancing for multiple backend pods, it uses unweighted round-robin scheduling.
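If you want to switch to the IPVS mode described above, the required kernel modules must be available on every node before you change the ConfigMap. A minimal sketch (the module names listed are the commonly required ones; verify them against your kernel and kube-proxy version):
$ lsmod | grep -e ip_vs -e nf_conntrack                        # check whether the IPVS modules are already loaded
$ sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh           # load the core IPVS module and its schedulers
$ kubectl -n kube-system edit configmap kube-proxy             # set mode: "ipvs" in config.conf
$ kubectl -n kube-system rollout restart daemonset kube-proxy  # recreate the pods so the new mode takes effect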
Take a look: kube-proxy, kube-proxy-article, aks-kube-proxy.
Read also: proxies-in-kubernetes.