Bug 1944679 - ovn-controller not ready due to error "ovs_list_is_empty(&f->list_node) failed in flood_remove_flows_for_sb_uuid"
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.8.0
Assignee: Tim Rozet
QA Contact: Ross Brattain
URL:
Whiteboard: UpdateRecommendationsBlocked
Depends On:
Blocks: 1945718
 
Reported: 2021-03-30 13:34 UTC by Christian Passarelli
Modified: 2024-06-14 01:04 UTC
CC: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1945718
Environment:
Last Closed: 2021-04-01 18:20:59 UTC
Target Upstream Version:
Embargoed:



Description Christian Passarelli 2021-03-30 13:34:42 UTC
Description of problem:
OVN controller not ready on ovnkube-node pods, throwing the following error:
~~~
2021-03-29T14:15:24.918797367Z 2021-03-29T14:15:24Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
2021-03-29T14:15:24.920258767Z 2021-03-29T14:15:24Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
2021-03-29T14:17:47.892178610Z ovn-controller: controller/ofctrl.c:1198: assertion ovs_list_is_empty(&f->list_node) failed in flood_remove_flows_for_sb_uuid()
~~~
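
For context on what the assertion means: ofctrl tracks flows on Open vSwitch intrusive doubly linked lists, and the failing check says a flow's list_node was expected to be detached (empty) but was still linked into a list. The sketch below is a simplified illustration only, not the code from controller/ofctrl.c; the real list API lives in OVS's lib/openvswitch/list.h, and the flow struct and double-tracking scenario here are hypothetical.

~~~
/* Minimal sketch of an OVS-style intrusive list and the assertion
 * pattern from the log above.  This is NOT the real OVS/OVN code
 * (see lib/openvswitch/list.h and controller/ofctrl.c); the struct
 * names and the double-tracking scenario are hypothetical. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct ovs_list {
    struct ovs_list *prev, *next;
};

/* An initialized, detached node points at itself... */
static void list_init(struct ovs_list *node) {
    node->prev = node->next = node;
}

/* ...so "empty" means "not currently linked into any list". */
static bool list_is_empty(const struct ovs_list *node) {
    return node->next == node;
}

static void list_push_back(struct ovs_list *head, struct ovs_list *node) {
    node->prev = head->prev;
    node->next = head;
    head->prev->next = node;
    head->prev = node;
}

/* Hypothetical stand-in for a flow record tracked by ofctrl. */
struct flow {
    int id;
    struct ovs_list list_node;
};

/* The crashing pattern: before linking a flow into a to-be-removed
 * list, assert that it is not already on one.  If two code paths
 * reach the same flow, the second call aborts the process. */
static void collect_for_removal(struct ovs_list *pending, struct flow *f) {
    assert(list_is_empty(&f->list_node));   /* the failing check */
    list_push_back(pending, &f->list_node);
}

int main(void) {
    struct ovs_list pending;
    struct flow f = { .id = 1 };

    list_init(&pending);
    list_init(&f.list_node);

    collect_for_removal(&pending, &f);      /* ok: node was detached */
    printf("f linked: %s\n", list_is_empty(&f.list_node) ? "no" : "yes");

    /* Calling collect_for_removal(&pending, &f) again here would trip
     * the assert, analogous to the abort seen in the log. */
    return 0;
}
~~~

When such an assertion fails, ovn-controller aborts and the container restarts, which matches the constant ovnkube-node restarts reported below; the actual fix shipped in the OVN package itself (see comments 7 and 9).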

Version-Release number of selected component (if applicable):
OCP 4.6.22

How reproducible:
Not sure.

Steps to Reproduce:
1.
2.
3.

Actual results:
ovnkube-node pods are constantly restarted.

Expected results:
ovn-controller stays ready and ovnkube-node pods do not restart.

Additional info:
The following OVN bugs already exist for this issue:
- https://bugzilla.redhat.com/show_bug.cgi?id=1929978
- https://bugzilla.redhat.com/show_bug.cgi?id=1928012

Comment 7 Ben Bennett 2021-04-01 18:20:59 UTC
This was fixed in 4.7.3.

Comment 8 W. Trevor King 2021-04-01 19:04:44 UTC
We're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z.  The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way.  Sample answers are provided to give more context and the ImpactStatementRequested label has been added to this bug.  When responding, please remove ImpactStatementRequested and set the ImpactStatementProposed label.  The expectation is that the assignee answers these questions.

Who is impacted?  If we have to block upgrade edges based on this issue, which edges would need blocking?
* example: Customers upgrading from 4.y.Z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
* example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time

What is the impact?  Is it serious enough to warrant blocking edges?
* example: Up to 2 minute disruption in edge routing
* example: Up to 90 seconds of API downtime
* example: etcd loses quorum and you have to restore from backup

How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
* example: Issue resolves itself after five minutes
* example: Admin uses oc to fix things
* example: Admin must SSH to hosts, restore from backups, or perform other non-standard admin activities

Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
* example: No, it has always been like this; we just never noticed
* example: Yes, from 4.y.z to 4.y+1.z or from 4.y.z to 4.y.z+1

Comment 9 Ross Brattain 2021-04-01 19:50:01 UTC
Verified that 4.8.0-0.nightly-2021-04-01-072432 includes ovn2.13-20.12.0-25.el8fdp.x86_64, which should fix the issue per https://bugzilla.redhat.com/show_bug.cgi?id=1929978#c3

No "failed in flood_remove_flows_for_sb_uuid" errors present in logs.

All ovnkube-node pods are healthy.

Comment 10 Tim Rozet 2021-04-01 21:05:00 UTC
Who is impacted?  If we have to block upgrade edges based on this issue, which edges would need blocking?
All customers using 4.6.22 and later 4.6 versions with OVN version 20.09.0-7.el8fdp. It is more likely to happen when using Network Policy.

What is the impact?  Is it serious enough to warrant blocking edges?
ovn-controller may crash continuously, causing a total network outage on the affected nodes.

How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
There is no remediation other than downgrading back to a previous version, or upgrading to a newer 4.6 release that contains the proposed revert to an older OVN version.

Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
Yes, this is a regression. Versions prior to 4.6.22 are not affected.

Comment 11 zhaozhanqi 2021-04-02 08:03:16 UTC
@cpassare 

Hi, could you help list what kind of network policy the customer is using? Tim said this is more likely to happen when using network policy, but QE has not been able to reproduce the issue so far with any kind of network policy. We want to enhance our test scenarios in case we missed something, thanks.

Comment 14 W. Trevor King 2021-04-05 17:40:37 UTC
Setting UpdateRecommendationsBlocked, because we blocked * -> 4.6.22 and * -> 4.6.23 on this last week [1].

[1]: https://github.com/openshift/cincinnati-graph-data/pull/739

Comment 15 Lucas López Montero 2021-05-19 10:09:00 UTC
KCS article written: https://access.redhat.com/solutions/6055141.

