The userspace kube-proxy implementation provided a feature where, if an endpoint did not successfully connect, kube-proxy would "roll over" to another endpoint. The iptables implementation of the service layer is now the default and, while it offers higher performance, it is incapable of providing this feature.

* There are documentation implications because people may not understand that this change of implementation occurred, and may not understand why things are not behaving as they once were.
* There is a documentation gap in that selecting the userspace implementation is possible in the installer but is undocumented.
* The implications of using one mechanism versus the other need to be documented as well. Because of the retry feature, the userspace kube-proxy acts much more like a traditional load balancer than the iptables implementation does.
* Understanding how all of this relates to probes (specifically liveness) as well as node timeouts/evictions/etc. is also important.
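To make the probe connection concrete: since the iptables proxy will keep routing to a failed endpoint rather than retrying another one, readiness probes become the main mechanism for taking a bad endpoint out of rotation (the kubelet removes a pod that fails its readiness probe from the Service's endpoints). A minimal sketch of a pod spec fragment follows; the pod name, image, port, and path are made-up examples, not from this bug:

```yaml
# Hypothetical pod spec fragment; names/ports/paths are examples only.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: example/web:latest
    ports:
    - containerPort: 8080
    readinessProbe:       # failing this removes the pod from Service endpoints,
      httpGet:            # so the iptables proxy stops sending it traffic
        path: /healthz
        port: 8080
      periodSeconds: 5
      failureThreshold: 2
    livenessProbe:        # failing this restarts the container
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
```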
This is a documentation issue... we'll clarify the 3.2 docs to cover the choices, the pros and cons, and how to switch between the two modes.
Notes:
- filed a docs bug upstream: https://github.com/kubernetes/kubernetes.github.io/pull/401
- ansible var: openshift_node_proxy_mode, which can be 'iptables' (the default) or 'userspace'. (In theory this is a per-node configuration variable, but in practice you need to use the same value cluster-wide. So I think, in ansible terms, that means you'd set it in the [OSEv3:vars] section, not the [nodes] section? But I've never used ansible...)
- node-config.yaml:

    proxyArguments:
      proxy-mode:
      - userspace
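For the docs, a sketch of the inventory placement suggested above (assuming openshift_node_proxy_mode really belongs in [OSEv3:vars] when set cluster-wide; I haven't verified this against openshift-ansible):

    # Hypothetical inventory fragment; sets the proxy mode cluster-wide.
    [OSEv3:vars]
    openshift_node_proxy_mode=userspace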
also, upstream (non-docs) bug: https://github.com/kubernetes/kubernetes/issues/24322
Filed https://github.com/openshift/openshift-docs/pull/1948