Bug 2121451

Summary: Profile with lower priority used over one with higher priority
Product: Red Hat Enterprise Linux 9
Reporter: Jaime Caamaño Ruiz <jcaamano>
Component: NetworkManager
Assignee: Wen Liang <wenliang>
Status: CLOSED MIGRATED
QA Contact: Filip Pokryvka <fpokryvk>
Severity: unspecified
Docs Contact:
Priority: medium
Version: 9.0
CC: bgalvani, ferferna, fpokryvk, lrintel, rkhan, sfaye, sukulkar, thaller, till
Target Milestone: rc
Keywords: MigratedToJIRA, Triaged
Target Release: ---
Flags: pm-rhel: mirror+
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: NetworkManager-1.43.10-1.el9
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-09-05 11:05:00 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
NM trace log (flags: none)

Description Jaime Caamaño Ruiz 2022-08-25 13:42:37 UTC
Created attachment 1907563 [details]
NM trace log

Description of problem:

With a specific set of profiles, on boot, profiles with lower priority are used over profiles with higher priority. Bonds and OVS are involved, and possibly a complicated auto-connection dependency graph, but the intended activation can be achieved manually without any issues, so there is no reason NM could not achieve the same automatically on boot. The perception from the user is that priority is being ignored.
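(For reference, a profile's priority is set via the `connection.autoconnect-priority` property, stored in the profile keyfile under `[connection]`. A hypothetical fragment for illustration only; the names and values in the attached profiles may differ:)

```ini
# /etc/NetworkManager/system-connections/bond-slave-eth1-slave-ovs-clone.nmconnection
# Hypothetical example: a higher autoconnect-priority value should win on boot.
[connection]
id=bond-slave-eth1-slave-ovs-clone
type=ethernet
interface-name=eth1
autoconnect=true
autoconnect-priority=100
```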

On boot, no manual intervention, this is the status:

[root@fedora system-connections]# nmcli c show
NAME                             UUID                                  TYPE           DEVICE 
Wired connection 1               40b0b9db-c803-3e7f-abb2-39d8593e3485  ethernet       eth0   
ovs-if-br-ex                     4cff5623-604b-4cd0-9a5d-62915df9460c  ovs-interface  br-ex  
bond-slave-eth1                  a38b0380-f8a0-4098-af0e-ece8106d4cab  ethernet       eth1   
bond-slave-eth2                  7588cca3-1cb6-4213-9b90-956aa77821c2  ethernet       eth2   
br-ex                            cf097898-974d-4ef2-a804-917a04d16090  ovs-bridge     br-ex  
ovs-if-phys0                     1141ecb1-06e5-4e55-a224-228307d3adc6  bond           bond0  
ovs-port-br-ex                   43575ecf-f90a-464d-8843-9098a69dc599  ovs-port       br-ex  
ovs-port-phys0                   a117fb62-a338-4c42-b4a5-6f855cd82dbd  ovs-port       bond0  
bond-bond0                       a9732df4-4570-4f64-853f-f8b21d6c8a09  bond           --     
bond-slave-eth1-slave-ovs-clone  90e70007-17ac-496a-aae0-6f134e011314  ethernet       --     
bond-slave-eth2-slave-ovs-clone  7ca5b4cd-6d15-4693-a7df-1534782d9b3a  ethernet       -- 

bond-slave-eth1-slave-ovs-clone and bond-slave-eth2-slave-ovs-clone have higher priority than bond-slave-eth1 and bond-slave-eth2, can be activated manually without any issues, and are expected to be activated over bond-slave-eth1 and bond-slave-eth2 after a reboot.
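(The expected behavior is that, among the candidate profiles for a device, NetworkManager auto-activates the one with the highest `connection.autoconnect-priority`. A minimal sketch of that selection rule, with hypothetical priority values chosen for illustration:)

```shell
#!/bin/sh
# Sketch of the expected autoconnect selection rule: among the candidate
# profiles for one device, the highest autoconnect-priority should win.
# The priority values below are hypothetical examples.
candidates="bond-slave-eth1:0
bond-slave-eth1-slave-ovs-clone:99"

# Sort numerically by the priority field (after the colon), highest first,
# and take the name of the winner.
winner=$(printf '%s\n' "$candidates" | sort -t: -k2,2 -rn | head -n1 | cut -d: -f1)
echo "$winner"
```

On boot the bug shows the opposite outcome: the lower-priority profile ends up active.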

Attached: the connection profiles and an NM trace log.


Version-Release number of selected component (if applicable):
1.32.12-2.fc35


How reproducible:
Always

Comment 2 Jaime Caamaño Ruiz 2022-08-25 13:45:38 UTC
Adding the OpenShift machine-config-operator PR that hit this issue, for a bit more contextual info:
https://github.com/openshift/machine-config-operator/pull/3312

Comment 9 Filip Pokryvka 2023-07-17 10:50:30 UTC
Hi Jaime,

can you please check the patched build without the workaround? I am unable to reproduce locally (creating profiles with the same configs, the correct ones are active even on the affected version).

Fernando states that it is not that easy to reproduce, so it will most likely be verified as sanity-only, so we would like confirmation that it is fixed in the affected environments.

Thank you!

Comment 10 Jaime Caamaño Ruiz 2023-07-25 11:18:35 UTC
(In reply to Filip Pokryvka from comment #9)
> Hi Jaime,
> 
> can you please check the patched build without the workaround? I am unable
> to reproduce locally (creating profiles with the same configs, the correct
> ones are active even on the affected version).
> 
> Fernando states that it is not that easy to reproduce, so it will most
> likely be verified as sanity-only, so we would like confirmation that it
> is fixed in the affected environments.
> 
> Thank you!

I can reproduce with this vagrant box
https://download.fedoraproject.org/pub/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-Vagrant-35-1.2.x86_64.vagrant-libvirt.box

I can try with the patched build if someone lets me know how to install it.

Comment 16 RHEL Program Management 2023-09-05 11:04:25 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 17 RHEL Program Management 2023-09-05 11:05:00 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it and begin with "RHEL-" followed by an integer. You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues.