Bug 1806623
| Summary: | [OVN][openvswitch 2.9->2.11 update, 13z8->13z11] Multiple port binding errors after overcloud update | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Roman Safronov <rsafrono> |
| Component: | python-networking-ovn | Assignee: | Jakub Libosvar <jlibosva> |
| Status: | CLOSED INSUFFICIENT_DATA | QA Contact: | Eran Kuris <ekuris> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 13.0 (Queens) | CC: | apevec, jlibosva, lhh, majopela, scohen |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-01-11 14:14:53 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Roman Safronov
2020-02-24 16:20:08 UTC
Binding fails because the VMs' security groups try to use port groups while the maintenance task has not yet successfully migrated them away from address sets:
2020-02-23 20:27:18.459 [c2] 37 DEBUG networking_ovn.common.maintenance [req-fe795db1-bb36-4e90-bfef-b35a9cff0f7b - - - - -] Maintenance task: Fixing resource a940c8ac-f657-45b5-a849-42730a6b41be (type: security_groups) at delete check_for_inconsistencies /usr/lib/python2.7/site-packages/networking_ovn/common/maintenance.py:325
2020-02-23 20:27:18.461 [c2] 37 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn command(idx=0): DelAddrSetCommand(if_exists=True, name=as_ip4_a940c8ac_f657_45b5_a849_42730a6b41be) do_commit /usr/lib/python2.7/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:84
2020-02-23 20:27:18.461 [c2] 37 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn command(idx=1): DelAddrSetCommand(if_exists=True, name=as_ip6_a940c8ac_f657_45b5_a849_42730a6b41be) do_commit /usr/lib/python2.7/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:84
2020-02-23 20:27:18.463 [c2] 37 ERROR ovsdbapp.backend.ovs_idl.transaction [-] OVSDB Error: The transaction failed because the IDL has been configured to require a database lock but didn't get it yet or has already lost it: RuntimeError: OVSDB Error: The transaction failed because the IDL has been configured to require a database lock but didn't get it yet or has already lost it
2020-02-23 20:27:18.463 [c2] 37 ERROR ovsdbapp.backend.ovs_idl.transaction [req-fe795db1-bb36-4e90-bfef-b35a9cff0f7b - - - - -] Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/ovsdbapp/backend/ovs_idl/connection.py", line 122, in run
txn.results.put(txn.do_commit())
File "/usr/lib/python2.7/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", line 115, in do_commit
raise RuntimeError(msg)
RuntimeError: OVSDB Error: The transaction failed because the IDL has been configured to require a database lock but didn't get it yet or has already lost it
: RuntimeError: OVSDB Error: The transaction failed because the IDL has been configured to require a database lock but didn't get it yet or has already lost it
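For context, each Neutron security group is represented in the OVN northbound DB by a pair of Address Sets (one per IP version) that the maintenance task is meant to replace with a single Port Group. The sketch below only illustrates the naming convention visible in the log; the helper names are hypothetical stand-ins for the real ones in networking_ovn.common.utils.

```python
# Hypothetical helpers mirroring the naming scheme seen in the log above;
# the real implementations in networking_ovn.common.utils may differ.
def ovn_addrset_name(sg_id, ip_version):
    # Address Set per SG and IP version, e.g. the DelAddrSetCommand
    # targets in the failed transaction above.
    return ('as_%s_%s' % (ip_version, sg_id)).replace('-', '_')

def ovn_port_group_name(sg_id):
    # The single Port Group that is supposed to replace both Address
    # Sets once the SG is migrated (assumed name format).
    return ('pg_%s' % sg_id).replace('-', '_')

sg = 'a940c8ac-f657-45b5-a849-42730a6b41be'
print(ovn_addrset_name(sg, 'ip4'))  # as_ip4_a940c8ac_f657_45b5_a849_42730a6b41be
print(ovn_addrset_name(sg, 'ip6'))  # as_ip6_a940c8ac_f657_45b5_a849_42730a6b41be
print(ovn_port_group_name(sg))      # pg_a940c8ac_f657_45b5_a849_42730a6b41be
```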
I need to investigate why the DB lock has been lost.
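For reference, the lock in question is an OVSDB lock that the python-ovs IDL can be configured to require before any transaction commits. Below is a minimal sketch of that mechanism, assuming the ovs Python bindings, a local NB endpoint, a schema file at the usual path, and an illustrative lock name (networking-ovn's actual lock name may differ):

```python
import ovs.db.idl
import ovs.poller

# Assumed endpoint and schema location; adjust for the deployment.
NB_ENDPOINT = 'tcp:127.0.0.1:6641'
SCHEMA_PATH = '/usr/share/openvswitch/ovn-nb.ovsschema'

helper = ovs.db.idl.SchemaHelper(SCHEMA_PATH)
helper.register_all()
idl = ovs.db.idl.Idl(NB_ENDPOINT, helper)

# Requesting a named lock makes every transaction conditional on holding
# it: if this IDL has not acquired the lock yet, or has lost it to another
# client, commits fail; ovsdbapp surfaces that as the RuntimeError above.
idl.set_lock('maintenance_worker_lock')  # illustrative lock name

while not idl.has_lock:
    idl.run()
    if idl.is_lock_contended:
        print('lock held by another client; transactions would fail')
    poller = ovs.poller.Poller()
    idl.wait(poller)
    poller.block()
```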
I have never found the reason why the lock got lost. Since 13 is ELS, I'm going to close this bug, but feel free to re-open it in case the problem is hit again.