Description of problem:
A hostNetwork pod can access the MCS ports 22623 and 22624 on the master. This was fixed for OpenShift SDN, OVN-Kubernetes [1], and Kuryr [2] by custom patches, but it was never solved at the system level. Because of this, third-party CNIs that have not implemented this blocking may leave these ports accessible.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1759338
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1856289

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
0. Deploy the cluster with a third-party CNI provider.
1. Create a hostNetwork pod as the kubeadmin user:
$ oc login -u kubeadmin -p xxxxx
$ oc create -f https://raw.githubusercontent.com/anuragthehatter/v3-testfiles/master/networking/hostnetwork-pod.json
2. oc rsh into the pod and curl the master IP on ports 22623 and 22624:
$ oc rsh hello-pod
~$ curl -I http://10.0.129.26:22623/config/master -k
HTTP/2 200
~$ curl -I http://10.0.129.26:22624/config/master -k
HTTP/2 200

Actual results:
The hostNetwork pod can access the MCS ports.

Expected results:
The hostNetwork pod should not be able to access the MCS ports.

Additional notes:
We should fix this on the OpenShift side instead of leaving this responsibility to individual CNIs. See the iptables approach adopted by Kuryr [3] and OVN-Kubernetes [4].

[3] https://github.com/openshift/cluster-network-operator/pull/698/files
[4] https://github.com/openshift/ovn-kubernetes/pull/170/files
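For context, the per-CNI fixes linked above use node-local iptables rules to reject traffic destined for the MCS ports. A minimal sketch of that approach is below; the chain name is illustrative and not taken from the linked PRs, and the exact match conditions in the real patches differ:

```shell
# Sketch only: reject node-originated and forwarded traffic to the MCS
# ports (22623/22624), in the spirit of the Kuryr / OVN-Kubernetes patches.
# The chain name BLOCK-MCS is a placeholder, not from the linked PRs.
iptables -N BLOCK-MCS
iptables -A BLOCK-MCS -p tcp --dport 22623 -j REJECT
iptables -A BLOCK-MCS -p tcp --dport 22624 -j REJECT
# OUTPUT covers hostNetwork pods (they share the node's network namespace);
# FORWARD covers traffic routed on behalf of regular pods.
iptables -I OUTPUT -j BLOCK-MCS
iptables -I FORWARD -j BLOCK-MCS
```

Rules like these would have to be managed on every node (and kept in sync across reboots), which is why doing it once on the OpenShift side is preferable to each CNI reimplementing it.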
There are multiple conversations about this going on at the same time...
A similar issue was reported in this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1939772 A notable difference was that the MCS was exposed outside the cluster on a bare-metal deployment.
*** Bug 1939772 has been marked as a duplicate of this bug. ***