Bug 1888348 - Bond port gets detached when attaching the bond to a new Linux bridge
Summary: Bond port gets detached when attaching the bond to a new Linux bridge
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: NetworkManager
Version: 8.4
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 8.4
Assignee: Beniamino Galvani
QA Contact: Vladimir Benes
URL:
Whiteboard:
Depends On:
Blocks: 1915832
 
Reported: 2020-10-14 16:05 UTC by Gris Ge
Modified: 2021-05-18 13:30 UTC
CC List: 11 users

Fixed In Version: NetworkManager-1.30.0-0.6.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1915832
Environment:
Last Closed: 2021-05-18 13:29:43 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
Reproduce script (1.25 KB, application/x-shellscript), 2020-10-14 16:05 UTC, Gris Ge
System logs with NM trace enabled (1.52 MB, text/plain), 2020-10-14 16:07 UTC, Gris Ge
Reproduce script (1.21 KB, application/x-shellscript), 2020-12-02 05:48 UTC, Gris Ge
System logs with NM trace enabled (537.88 KB, text/plain), 2020-12-02 05:49 UTC, Gris Ge

Description Gris Ge 2020-10-14 16:05:51 UTC
Created attachment 1721525 [details]
Reproduce script

Description of problem:

When attaching an existing bond to a new Linux bridge, the bond port might be detached.


Version-Release number of selected component (if applicable):
NetworkManager-1.26.0-8.el8.x86_64



How reproducible:
Roughly 33% of the time, on an RHV server only. Cannot reproduce locally.

Steps to Reproduce:
1. sudo ./bug.sh (a rough sketch of the scenario being exercised is given below)
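
The attached bug.sh is not shown inline (per comment 3 it ends in an nmstatectl call), so the following is only a rough, hypothetical nmcli-based sketch of the scenario it exercises; the names bond1, dummy_98 and dummy_99 are taken from the logs, while br-test is made up:

# Assumed setup: a bond with two dummy ports, all managed by NetworkManager.
nmcli connection add type bond ifname bond1 con-name bond1 ipv4.method disabled ipv6.method disabled
nmcli connection add type dummy ifname dummy_98 con-name dummy_98 master bond1 slave-type bond
nmcli connection add type dummy ifname dummy_99 con-name dummy_99 master bond1 slave-type bond
nmcli connection up bond1

# Attach the existing bond to a newly created Linux bridge.
nmcli connection add type bridge ifname br-test con-name br-test ipv4.method disabled ipv6.method disabled
nmcli connection modify bond1 connection.master br-test connection.slave-type bridge
nmcli connection up br-test
nmcli connection up bond1

# Expected: dummy_98 and dummy_99 are still bond ports; the reported failure is that they end up detached.
ip -o link show master bond1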

Actual results:

Nmstate fails, complaining about losing bond ports.

Expected results:

No failure.

Additional info:

Comment 1 Gris Ge 2020-10-14 16:07:48 UTC
Created attachment 1721526 [details]
System logs with NM trace enabled

Comment 3 Thomas Haller 2020-10-15 07:22:56 UTC
> Nmstate fails, complaining about losing bond ports.

The logfile shows a lot of things happening; I don't see what is considered wrong there.

The last step of the reproducer script is one nmstatectl call, which "fails". But the bug report doesn't explain *why* it fails. What does "losing bond ports" mean? At timestamp 1602686937.7269 we see that the bond ports are about to activate.

Then, at timestamp "1602686937.9038" there is another activation request for bond0 (although bond0 is already about to be re-activated due to the bridge's connection.autoconnect-slaves setting).
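
For reference, the autoconnect-slaves behaviour mentioned here is controlled per master profile; a small hedged example of inspecting and, if desired, disabling it with nmcli (the profile name C18_nm_network2 is taken from the audit log in comment 6 and appears to be the newly added bridge profile):

# Show whether activating the master also (re-)activates its slave profiles
# (-1 = use the global default, 0 = no, 1 = yes).
nmcli -f connection.autoconnect-slaves connection show C18_nm_network2

# Hypothetical: turn the behaviour off for this profile only.
nmcli connection modify C18_nm_network2 connection.autoconnect-slaves 0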

but even after that, the bond ports are again about to re-activate (e.g. at 1602686938.4919).


Until at 1602686938.5156, nmstate gives up and tears everything down.


I think nmstate should simply have waited longer, and it would have reached the expected configuration. By "longer", I don't mean a long time. nmstate aborted after less than 2 seconds.

Also, the reproducer script (and afterwards the nmstate call) performs the activations in a manner that results in multiple activations. E.g. nmstate calling connection-activate for bond1 results in the dummy devices being re-activated, although they were already activated. This is related to the email thread "bulk profile activation?". But regardless, I think nmstate just didn't wait long enough before giving up. Back to my earlier question: how did nmstate determine that something is wrong and abort?

Comment 4 Gris Ge 2020-10-19 12:47:39 UTC
(In reply to Thomas Haller from comment #3)
> > Nmstate fails, complaining about losing bond ports.
> 
> The logfile shows a lot of things happening; I don't see what is considered
> wrong there.
> 
> The last step of the reproducer script is one nmstatectl call, which
> "fails". But the bug report doesn't explain *why* it fails. What does
> "losing bond ports" mean? At timestamp 1602686937.7269 we see that the
> bond ports are about to activate. 

Losing a bond port means that dummy_98/dummy_99 is not set as a bond port but still
exists in `ip link`.

> 
> Then, at timestamp "1602686937.9038" there is another activation request for
> bond0 (although bond0 is already about to be re-activated due to the
> bridge's connection.autoconnect-slaves setting).

I don't know what NM is doing internally.

Nmstate activates the newly created bridge, waits for that to finish,
and then activates the bond.

> 
> but even after that, the bond ports are again about to re-activate (e.g. at
> 1602686938.4919).
> 
> 
> Until at 1602686938.5156, nmstate gives up and tears everything down.
> 
> 
> I think nmstate should simply have waited longer, and it would have reached
> the expected configuration. By "longer", I don't mean a long time. nmstate
> aborted after less than 2 seconds.

Actually, after the profile has been marked as activated by NM, nmstate verifies
the state every second and retries 5 times.

So nmstate only fails after 5 seconds.

> 
> Also, the reproducer script (and afterwards the nmstate call) performs the
> activations in a manner that results in multiple activations. E.g. nmstate
> calling connection-activate for bond1 results in the dummy devices being
> re-activated, although they were already activated. This is related to the
> email thread "bulk profile activation?". But regardless, I think nmstate
> just didn't wait long enough before giving up. Back to my earlier question:
> how did nmstate determine that something is wrong and abort?

Nmstate determines that something is wrong by verifying the state after NM has
applied the configuration. We retried 5 times with a 1-second interval.
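
This is not nmstate's actual code, but a minimal shell illustration of the check being described, assuming the interface names from this report (verify once per second, up to 5 times, that bond1 has dummy_98 and dummy_99 as ports):

for i in 1 2 3 4 5; do
    # The kernel lists the current bond ports in this sysfs file.
    ports=$(cat /sys/class/net/bond1/bonding/slaves 2>/dev/null)
    case "$ports" in
        *dummy_98*dummy_99*|*dummy_99*dummy_98*) echo "verified"; exit 0 ;;
    esac
    sleep 1
done
echo "bond ports still missing after 5 retries" >&2
exit 1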

If you need a server to debug on yourself, I can ask RHV to prepare one.

Comment 5 Gris Ge 2020-10-19 12:51:31 UTC
The log you see does not include nmstate's verification, as the reproducer script uses `--no-verify`.

I tried waiting longer; the dummy interface is still not changed into a bond port.
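
For context, `--no-verify` is what disables that verification step on the command line; a hedged usage sketch (the state file name is hypothetical, and depending on the nmstate version the subcommand is `set` or `apply`):

nmstatectl apply --no-verify desired-state.yml   # apply without the post-apply verification
nmstatectl apply desired-state.yml               # default: verify the result and roll back on mismatch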

Comment 6 Thomas Haller 2020-10-19 14:01:51 UTC
In the logfile from comment 1, nmstate waits only about 1.3 seconds before aborting.

See:

[1602686937.3058] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=1819111 uid=0 result="success"
[1602686937.3131] audit: op="connection-update" uuid="fd5a3119-59de-4e1b-a623-205aeff957db" name="bond1" args="bond.options,connection.master,connection.lldp,connection.slave-type,ipv6.met>
[1602686937.3158] audit: op="connection-add" uuid="9fd8cab8-2a89-474f-9d37-918a9fde8afa" name="C18_nm_network2" pid=1819111 uid=0 result="success"
[1602686937.3447] audit: op="connection-activate" uuid="9fd8cab8-2a89-474f-9d37-918a9fde8afa" name="C18_nm_network2" pid=1819111 uid=0 result="success"
[1602686937.7329] audit: op="device-managed" arg="managed" pid=1819111 uid=0 result="success"
[1602686937.9038] audit: op="connection-activate" uuid="fd5a3119-59de-4e1b-a623-205aeff957db" name="bond1" pid=1819111 uid=0 result="success"
[1602686938.4954] audit: op="device-delete" interface="br-int" ifindex=9 pid=1819111 uid=0 result="success"
[1602686938.5156] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=1819111 uid=0 result="success"

nmstate was not waiting long enough. It is not clear whether the activation would have proceeded correctly if nmstate had waited longer, but there is no reason to assume that it wouldn't.


> Actually, after the profile has been marked as activated by NM, nmstate verifies
> the state every second and retries 5 times.
> 
> So nmstate only fails after 5 seconds.

Such a retry is not visible in the logfile from comment 1.


> I tried waiting longer; the dummy interface is still not changed into a bond port.

Is there a log of that?


Why did nmstate decide to abort at (around) timestamp 1602686938.5? At that point, the activation seemed right on track.

Comment 7 Gris Ge 2020-12-02 05:48:44 UTC
Created attachment 1735478 [details]
Reproduce script

This version of the reproducer works in a VM.

Comment 8 Gris Ge 2020-12-02 05:49:30 UTC
Created attachment 1735479 [details]
System logs with NM trace enabled

New log

Comment 9 Gris Ge 2020-12-02 05:58:39 UTC
Hi Thomas,

I have reproduced this problem locally in my RHEL 8.3 VM.

Reproducer: for x in `seq 1 100`; do sudo ./bug.sh || break; done

The failure in the new log is that both `dummy_98` and `dummy_99` are not assigned to bond1.

The log shows both `dummy_98` and `dummy_99` being marked as activated before `bond1` is marked as activated.

I made some notes on these logs at http://people.redhat.com/fge/bz_1888348/nm_log_notes.md

The same problem is also reproducible on NetworkManager-1.30.0-0.2.el8.x86_64.

Comment 10 Gris Ge 2020-12-02 05:59:48 UTC
And in the new log, nmstate waited 5 seconds (repeatedly rechecking whether bond1 has dummy_98 and dummy_99 as ports) before rolling back.

Comment 16 Antonio Cardace 2021-01-13 08:37:06 UTC
Hi Michael,

We would be able to backport this to 8.3.0.2; we still have a (small) time window to include this fix.

Would that be alright?

Comment 17 Michael Burman 2021-01-13 08:54:19 UTC
(In reply to Antonio Cardace from comment #16)
> Hi Michael,
> 
> We would be able to backport this to 8.3.0.2; we still have a (small) time
> window to include this fix.
> 
> Would that be alright?

Hi Antonio,
Yes, this will be great for us! Thank you very much.

Comment 23 Vladimir Benes 2021-02-03 11:39:55 UTC
Setting OtherQA and marking this verified, as the RHV team was the only one able to verify the fix. Even a VM with a lot of CPU load does not show the issue with the reproducer suggested in comment #9.

Comment 26 errata-xmlrpc 2021-05-18 13:29:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: NetworkManager and libnma security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:1574

