Description of problem:
When the application is deployed on a bare-metal node and the egress hostsubnet is assigned to a VM node, namespace-wide egress traffic is blocked. Calls to the external endpoint fail as a result.

Version-Release number of selected component (if applicable):
v3.7.9

How reproducible:
Always.

Steps to Reproduce:
1. Build a cluster with a mix of at least one bare-metal node and one VMware virtual node. For VMware, do not use the VXLAN ESX plugin; use virtual switches instead.
2. Deploy a pod on the physical node and assign the VM node as the egress host node (a command sketch follows this report). See the instructions at:
https://docs.openshift.com/container-platform/3.7/admin_guide/managing_networking.html#enabling-static-ips-for-external-project-traffic
3. From the deployed pod (via oc exec or oc rsh), hit an external server outside of OCP.

Actual results:
No egress traffic flows. A TCP dump shows no VXLAN packets flowing to the egress IP on the VM node.

Expected results:
Egress traffic flows to the external server correctly.

Additional info:
- This issue does not occur if both nodes are deployed as VMs or both are deployed as physical nodes.
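For reference, a minimal command sketch for steps 2 and 3, based on the linked 3.7 documentation. The project name "egress-project", node name "vm-node-1", egress IP 192.168.120.10, pod name, and external host are illustrative placeholders, not values from this report:

  # Assign the egress IP to the project and to the VM node that will host it
  $ oc patch netnamespace egress-project -p '{"egressIPs": ["192.168.120.10"]}'
  $ oc patch hostsubnet vm-node-1 -p '{"egressIPs": ["192.168.120.10"]}'

  # From the pod on the bare-metal node, call an endpoint outside OCP
  $ oc exec <pod-name> -- curl -sS --max-time 10 http://<external-server>/

  # On the VM (egress) node, watch for VXLAN traffic (OpenShift SDN uses UDP port 4789)
  $ tcpdump -nn -i <interface> udp port 4789

In the failing case described above, the curl call times out and the tcpdump on the egress node shows no VXLAN packets arriving for the egress IP.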
This turned out to be unrelated to the physical-vs-VM difference; it is the already-fixed "egress IPs break after restarting node service" bug.

*** This bug has been marked as a duplicate of bug 1533153 ***