Description of problem:
The browbeat scenario has failed consistently in job DFG-perfscale-PerfCI-OSP16.2-dynamic-workloads-ovn for some time.

Version-Release number of selected component (if applicable):
ovn-2021-21.12.0-11.el8fdp.x86_64

How reproducible:
Always, in https://rhos-ci-jenkins.lab.eng.tlv2.redhat.com/view/DFG/view/perfscale/view/PerfCI/job/DFG-perfscale-PerfCI-OSP16.2-dynamic-workloads-ovn/ at least since 1st March; before that there was a different failure. The last success for this job was on 30th January, 2022. The job runs browbeat's dynamic_workload_min [1] scenario for 5 iterations; a ping to a VM fails in one iteration, that iteration aborts, the next iteration continues, some VMs pass, and then another ping fails. The equivalent job passes on 16.1, so this appears to be a regression in 16.2.

Steps to Reproduce:
1. Can be reproduced with https://rhos-ci-jenkins.lab.eng.tlv2.redhat.com/view/DFG/view/perfscale/view/PerfCI/job/DFG-perfscale-PerfCI-custom/65/parameters/

Actual results:
Ping to a VM fails randomly.

Expected results:
All scenarios should succeed.

Additional info:
[1] https://opendev.org/x/browbeat/src/branch/master/rally/rally-plugins/dynamic-workloads/dynamic_workload_min.py#L70-L73
*** This bug has been marked as a duplicate of bug 2066413 ***
What we saw is that when pinging from an external destination to a FIP (non-DVR), the ICMP reply packets coming out of the VM were dropped in the integration bridge of the compute node. Triggering a recompute on ovn-controller fixed the issue. Yatin, could you please upload the OVN databases to the BZ so that the core OVN team can try to reproduce it?
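For reference, a sketch of the diagnostic and workaround commands implied above, assuming standard OVN/OVS tooling on the affected compute node (the `br-int` bridge name is the OVN default, and the `inc-engine/recompute` unixctl command name may vary across OVN releases):

```shell
# On the affected compute node: inspect flows on the integration bridge
# to look for the datapath dropping the ICMP replies.
ovs-ofctl dump-flows br-int | grep icmp

# Force a full logical-flow recompute in ovn-controller; this is the
# step that restored connectivity in the scenario described above.
ovs-appctl -t ovn-controller inc-engine/recompute
```

These commands only make sense on a live compute node running ovn-controller; they are shown here so the workaround is reproducible while the root cause is investigated.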
*** This bug has been marked as a duplicate of bug 2069783 ***
ovn-2021-21.12.0-82.el8fdp.x86_64 is now available in puddle RHOS-16.2-RHEL-8-20220902.n.1. Moving to MODIFIED.