Description of problem:

Director-deployed OCP 3.11: the openvswitch service is running on the bare metal nodes by default [1] when deploying with Director. There are a couple of issues with this:

1/ The OCP tested integration matrix [2] states that 3.11 was tested against OVS 2.9, while we ship OVS 2.10 on the overcloud images used for the bare metal provisioning.
2/ It does not follow the openshift-ansible default, which runs OVS inside containers.

[1] https://github.com/openstack/tripleo-heat-templates/blob/master/extraconfig/services/openshift-master.yaml#L174
[2] https://access.redhat.com/articles/2176281

Version-Release number of selected component (if applicable):
openstack-tripleo-heat-templates-9.0.1-0.20181013060891.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Deploy an OCP overcloud with Director.
2. On any of the overcloud nodes, run: systemctl status openvswitch

Actual results:
The openvswitch service is running on the node.

Expected results:
The openvswitch service should be disabled on the host, with OVS running inside a container.

Additional info:
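For reference, a quick way to check where OVS is actually running on an overcloud node. This is only a sketch: the container listing command assumes a docker runtime, and the container name pattern is an assumption and may differ:

    # Actual behaviour: openvswitch runs as a host systemd service
    systemctl is-enabled openvswitch
    systemctl status openvswitch

    # Expected behaviour: OVS runs inside a container instead; the name
    # pattern grepped for here is an assumption
    docker ps --format '{{.Names}}' | grep -i ovs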
Linux bridges and Linux bonds support every configuration you can achieve with OVS bridges/bonds/VLANs (see the sketch below for an illustration). References can be found here: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html-single/advanced_overcloud_customization/#network-interface-reference

If a note in the docs requesting not to use OVS networking for the bare metal nodes is enough, I'd suggest doing it that way. If, at a technical level, we are covered by Linux bridges, as seems to be the case, you can close this BZ and ensure this is documented. Thanks!
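As an illustration of that claim, the bond + VLAN + bridge topology typically built with OVS can be reproduced with plain Linux primitives. The commands below are a sketch only: interface names, the bond mode, and the VLAN ID are assumptions, and in a Director deployment this would be expressed in the NIC config templates per the reference above rather than run by hand:

    # Illustrative only: bond two NICs, carry a VLAN over the bond, and
    # attach the VLAN interface to a Linux bridge (names/IDs are examples)
    ip link add name bond0 type bond mode 802.3ad
    ip link set eth0 down && ip link set eth0 master bond0
    ip link set eth1 down && ip link set eth1 master bond0
    ip link add link bond0 name bond0.100 type vlan id 100
    ip link add name br-ex type bridge
    ip link set bond0.100 master br-ex
    ip link set bond0 up
    ip link set bond0.100 up
    ip link set br-ex up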
No doc text required.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2019:0045