Using security groups as the destination or source of a security rule on OpenStack is very resource intensive. This can cause network performance issues in OpenStack Neutron. The degraded network traffic can in turn lead to installation failure, where the bootstrap process times out because pods cannot access resources through the OpenShift SDN internal network. For example, some pods are unable to successfully resolve IP addresses because they cannot reach the cluster's internal DNS service. Communication between pods becomes intermittent, leading to cascading failures.
Using `remote_group_id` in security rules is very inefficient: it triggers a lot of computation by the OVS agent to generate the flows, possibly exceeding the time allocated for flow generation. In such cases, especially in environments already under stress, master nodes may be unable to communicate with worker nodes, causing the deployment to fail. We're seeing this behavior in MOC, the cloud we use for our CI. The workaround is to use the more efficient `remote_ip_prefix` rather than `remote_group_id` when creating security rules. This was already done for openshift-ansible in the past: https://bugzilla.redhat.com/show_bug.cgi?id=1703947
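To illustrate the workaround, here is a minimal sketch of a Neutron security-group-rule API body keyed on `remote_ip_prefix` instead of `remote_group_id`. The helper name, group ID, and CIDR are illustrative, not the installer's actual code; only the Neutron field names are real.

```python
# Sketch of the workaround: scope a Neutron security group rule by CIDR
# (remote_ip_prefix) instead of by security group membership
# (remote_group_id). The function and its arguments are illustrative.

def make_ingress_rule(security_group_id, protocol, port_min, port_max, machine_cidr):
    """Return a Neutron security-group-rule request body using remote_ip_prefix."""
    return {
        "security_group_rule": {
            "security_group_id": security_group_id,
            "direction": "ingress",
            "ethertype": "IPv4",
            "protocol": protocol,
            "port_range_min": port_min,
            "port_range_max": port_max,
            # remote_ip_prefix avoids the per-member flow expansion that
            # remote_group_id triggers in the OVS agent.
            "remote_ip_prefix": machine_cidr,
        }
    }

# Hypothetical example: allow API traffic from the machine network.
rule = make_ingress_rule("sg-master", "tcp", 6443, 6443, "10.0.0.0/16")
print(rule["security_group_rule"]["remote_ip_prefix"])  # -> 10.0.0.0/16
```

The trade-off is that `remote_ip_prefix` trusts everything in the CIDR rather than only the group's current members, but it keeps the flow table size constant as nodes are added.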
A note for the QE verifier: this bug affects our CI, so we can already demonstrate the effectiveness of the patch — jobs are green again after the merge. We would still need your help with the usual regression and edge-case testing. Thank you!
No failure detected on the latest 4.5 nightly after the patch was merged, and the security group rules look correct. Marking as verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2409