Bug 2155991

Summary: VLAN device doesn't activate after reload
Product: Red Hat Enterprise Linux 9
Component: NetworkManager
Version: 9.0
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: low
Priority: medium
Target Milestone: rc
Keywords: Triaged
Type: Bug

Reporter: Andrea Panattoni <apanatto>
Assignee: Beniamino Galvani <bgalvani>
QA Contact: Filip Pokryvka <fpokryvk>
CC: bgalvani, fpokryvk, lrintel, manrodri, rkhan, sdodson, sfaye, sukulkar, thaller, till, vbenes

Fixed In Version: NetworkManager-1.43.4-1.el9
Doc Type: No Doc Update
Clones: 2159758 (view as bug list)
Bug Blocks: 2159758
Last Closed: 2023-11-07 08:37:57 UTC

Attachments: sosreport-worker-0

Description Andrea Panattoni 2022-12-23 09:57:56 UTC
Created attachment 1934243 [details]
sosreport-worker-0

Description of problem:

The device bond0.409 (VLAN over bonded interfaces) doesn't activate after issuing `nmcli connection reload`.

The problem can be seen in the attached sosreport, in the journalctl output:

~~~~~~~~~~~~~~
Dec 19 22:36:12 worker-0 configure-ovs.sh[6116]: + local 'connected_state=10 (unmanaged)'
Dec 19 22:36:12 worker-0 configure-ovs.sh[6116]: + [[ 10 (unmanaged) =~ disconnected ]]
Dec 19 22:36:12 worker-0 configure-ovs.sh[6116]: + echo 'Waiting for interface bond0.409 to activate...'
Dec 19 22:36:12 worker-0 configure-ovs.sh[6116]: Waiting for interface bond0.409 to activate...
Dec 19 22:36:12 worker-0 configure-ovs.sh[6116]: + timeout 60 bash -c 'while ! nmcli -g DEVICE,STATE c | grep "bond0.409:activated"; do sleep 5; done'
...
Dec 19 22:37:12 worker-0 configure-ovs.sh[6116]: + echo 'Warning: bond0.409 did not activate'
~~~~~~~~~~~~~~
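
For reference, a minimal sketch of a comparable topology and the reload step that precedes the failure. The interface and connection names below are illustrative only (on the cluster they come from the machine configs), and since the failure is timing-dependent it may not reproduce on every run:

~~~~~~~~~~~~~~
# Bond over two ports (device names are placeholders for this sketch)
nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=802.3ad"
nmcli connection add type ethernet con-name bond0-port1 ifname eno1 master bond0
nmcli connection add type ethernet con-name bond0-port2 ifname eno2 master bond0

# VLAN 409 on top of the bond
nmcli connection add type vlan con-name bond0.409 ifname bond0.409 vlan.parent bond0 vlan.id 409

# Re-read the connection profiles from disk, as configure-ovs.sh does
nmcli connection reload

# On an affected system the VLAN device may stay down instead of reaching "activated"
nmcli -g DEVICE,STATE c
~~~~~~~~~~~~~~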


Version-Release number of selected component (if applicable):

RHEL 8.6 (the base of the affected OCP 4.12 nodes)

How reproducible:

The bug reproduces often with NetworkManager LOGLEVEL=info, rarely with LOGLEVEL=debug, and has never been reproduced with LOGLEVEL=trace: the more verbose the logging, the less likely the failure, which suggests a timing-sensitive race.
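
For anyone trying to reproduce this, the log level can be switched at runtime without restarting the daemon, or set persistently via a configuration drop-in (the drop-in file name below is just an example):

~~~~~~~~~~~~~~
# Change the log level at runtime
nmcli general logging level INFO domains ALL    # level where the bug reproduces most often
nmcli general logging level TRACE domains ALL   # level where it was never reproduced

# Or persistently, via a drop-in (example file name)
cat > /etc/NetworkManager/conf.d/99-logging.conf <<'EOF'
[logging]
level=INFO
EOF
systemctl restart NetworkManager
~~~~~~~~~~~~~~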

Steps to Reproduce:
1. Configure a bond with a VLAN on top of it (e.g. bond0 and bond0.409, as in the sketch above).
2. Issue `nmcli connection reload` (configure-ovs.sh does this during boot).
3. Check the device states with `nmcli -g DEVICE,STATE c`.

Actual results:

The bond0.409 connection does not activate.

Expected results:

The bond0.409 connection activates, as the other connections (e.g. bond0) do.

Additional info:

- The error appeared in OCP 4.12 RC5.
- It does not occur in OCP 4.10, which is based on RHEL 8.4.
- The error occurs during boot, in configure-ovs.sh [1].


[1] https://github.com/openshift/machine-config-operator/blob/release-4.12/templates/common/_base/files/configure-ovs-network.yaml#L366

Comment 1 Scott Dodson 2023-01-09 17:36:46 UTC
If this is the root cause of the referenced OCPBUGS-3612 [1], it represents a regression for all OCP clusters upgrading from 4.10 to 4.11 or 4.12. Since 4.11 has already shipped and 4.12 is shipping imminently, we need this looked into urgently.

1 - https://issues.redhat.com/browse/OCPBUGS-3612

Comment 2 Beniamino Galvani 2023-01-24 08:22:31 UTC
Hi Andrea,

I'm working on a patch to fix this bug. Unfortunately, I can't reproduce the problem locally; would you test (or ask the customer to test) a scratch build once there is a proposed fix?

Comment 3 Andrea Panattoni 2023-01-24 09:03:32 UTC
Hi @bgalvani, thanks for the update.

@manrodri Do you think it's possible to test a scratch build (supposing an RPM) on the distributed CI you used for https://issues.redhat.com/browse/OCPBUGS-3612?

We should test the configure-ovs.sh script without the `touch /run/configure-ovs-boot-done` workaround.

Comment 4 Manuel Rodriguez 2023-01-24 14:23:43 UTC
Hi @apanatto,

In our Distributed CI we can install clusters from several OCP releases (nightly builds, EC, RC). If I'm understanding correctly, we'll need an OCP cluster running 4.12 without the workaround, then upgrade an RPM package and, I guess, reboot the nodes to test? Please let me know if that's the case and I can prepare a cluster.

Thanks,

Comment 5 Andrea Panattoni 2023-01-24 15:40:56 UTC
It would be better to install the RPM package before the OpenShift installation, but I suppose that's not easy, as it's all automated.

If you install the package later in the process, we need to be sure a simple reboot does not solve the problem. So the steps are slightly different:

1. Setup OCP cluster 4.12
2. Apply the MachineConfig that makes configure-ovs.sh fail
3. Reboot the node and check if it still fails
4. Install the RPM provided by Beniamino (one possible way is sketched after this list)
5. Reboot the node
6. Check if it comes up without errors
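
For steps 4 and 5, a sketch of one way to install the scratch RPMs on an RHCOS node, assuming root access on the node and the RPM files already copied there (file names are placeholders):

~~~~~~~~~~~~~~
# Replace the base-image NetworkManager packages with the scratch build
rpm-ostree override replace ./NetworkManager-*.rpm ./NetworkManager-libnm-*.rpm

# Reboot into the new deployment (step 5)
systemctl reboot
~~~~~~~~~~~~~~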

@manrodri do you think it is feasible?

@bgalvani any feedback on the above process?

Comment 6 Beniamino Galvani 2023-01-24 16:06:23 UTC
The test procedure above looks ok.

Comment 7 Manuel Rodriguez 2023-01-24 16:08:20 UTC
@apanatto thanks for the details; that looks good to me. I've never installed an RPM on an OCP node, but I'm up for testing, so please let me know when an RPM is available and I'll run the procedure.

Comment 13 Andrea Panattoni 2023-02-28 13:08:17 UTC
@manrodri did you have a chance to run more tests on this?
Although this problem is not fully deterministic, can we say the fix improves the overall stability of the startup process?

Comment 17 sfaye 2023-03-01 08:21:28 UTC
*** Bug 2159758 has been marked as a duplicate of this bug. ***

Comment 25 errata-xmlrpc 2023-11-07 08:37:57 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (NetworkManager bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:6585