Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.
Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".
If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry; the e-mail creates a ServiceNow ticket with Red Hat.
Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and have "MigratedToJIRA" set in "Keywords". The link to the successor Jira issue will be found under "Links", will have a small "two-footprint" icon next to it, and will direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.
Description - Rastislav Hepner, 2016-04-27 12:54:11 UTC
Created attachment 1151368 [details]
logs from reproducer
Description of problem:
The problematic system has the following interface setup:
+-----------------+ +----------------+
| | | |
| eth0 | | eth1 |
+-------+---------+ +---------+------+
| |
| |
+----+---------------------------+---+
| |
| |
| bond1 |
+-------------+-------------+
|
|
|
+-------------+-------------+
| |
| |
| Vlan0 |
+-------------+-------------+
|
|
+-------------+-------------+
| |
| 10.10.10.2/24 |
| br1 |
+---------------------------+
After system boot everything seems to be fine and all devices come up as expected. However, once NetworkManager is restarted (systemctl restart NetworkManager), the vlan0 device and bond1 will not come back up.
Version-Release number of selected component (if applicable):
I've tried this on two versions; the symptoms seem to be identical:
NetworkManager-1.0.6-29.el7_2.x86_64
NetworkManager-1.0.6-27.el7_2.x86_64
The data were acquired while running 1.0.6-27.
How reproducible:
It happens every time with the configuration mentioned in the description.
Steps to Reproduce:
1. Set up the network devices in the configuration shown in the description. I used nmtui.
2. Restart NetworkManager.
systemctl restart NetworkManager
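For anyone reproducing this from the command line instead of nmtui, the setup above could be sketched with nmcli roughly as follows. The connection names, the VLAN id (10), and the slave NICs (eth0/eth1) are assumptions; the report does not state them, and option syntax varies slightly across NetworkManager versions.

```shell
# Rough nmcli equivalent of the nmtui setup from the description.
# Run as root on a disposable test machine.
nmcli con add type bridge     con-name br1   ifname br1   ip4 10.10.10.2/24
nmcli con add type bond       con-name bond1 ifname bond1
nmcli con add type bond-slave con-name eth0  ifname eth0  master bond1
nmcli con add type bond-slave con-name eth1  ifname eth1  master bond1
nmcli con add type vlan       con-name vlan0 ifname vlan0 dev bond1 id 10 \
      master br1 slave-type bridge
systemctl restart NetworkManager
ip link show bond1; ip link show vlan0   # per the report, both stay down
```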
Actual results: bond1 and vlan0 devices are down after restart.
Expected results: bond1 and vlan0 should come up after restart.
Additional info:
I'm attaching the messages file containing the logs generated while I was reproducing the issue. I cut the file so it starts at system bootup (with the problematic configuration already in place); about 5 minutes later I restarted NetworkManager.
I've also attached the output of ip addr & ip link ... from before and after the NM restart.
If an sosreport is needed, I can provide it.
Comment 7 - Beniamino Galvani, 2016-04-28 15:21:42 UTC
I can reproduce this. There are some messages in logs suggesting that the connection matching logic isn't picking up the existing connections for vlan and bond:
Connection 'bond1' differs from candidate 'bond-bond1' in 802-3-ethernet.s390-options, 802-3-ethernet.s390-subchannels, 802-3-ethernet.s390-nettype
Connection 'vlan0' differs from candidate 'vlan-vlan0' in 802-3-ethernet.s390-options, 802-3-ethernet.s390-subchannels, 802-3-ethernet.s390-nettype
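The rejection these log lines describe can be illustrated generically: a runtime connection only matches an on-disk candidate if every compared property is identical, so even a spurious difference in the s390 fields (which should be empty on non-s390 hardware) is enough to reject the match. The following is a toy sketch of that idea, not NetworkManager's actual code; the "qeth" value is made up.

```shell
# Toy sketch: dump one property for the runtime connection and for the
# persisted candidate, then reject the match if they differ.
runtime=$(mktemp); candidate=$(mktemp)
printf '802-3-ethernet.s390-nettype=\n'     > "$runtime"    # runtime: empty
printf '802-3-ethernet.s390-nettype=qeth\n' > "$candidate"  # on disk: set
if diff -q "$runtime" "$candidate" >/dev/null; then
    result="matches"
else
    result="differs from"
fi
echo "Connection 'bond1' $result candidate 'bond-bond1' in 802-3-ethernet.s390-nettype"
rm -f "$runtime" "$candidate"
```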
I'm investigating the issue.
Comment 9 - Beniamino Galvani, 2016-04-28 19:29:15 UTC
The main issue in my tests is that when the bond is configured without
IP addresses, the interface is brought down after NM is stopped and not
activated on restart.
After assigning a static IP to the bond (nmcli con mod <con-name>
ipv4.method manual ipv4.address 172.31.255.255/32) the bond stays
up.
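Spelled out, the workaround from the previous paragraph looks like this; the connection name bond1 is taken from this report, and the /32 address is just the arbitrary one used in the test above.

```shell
# Workaround sketch: give the bond any static IPv4 address so it survives
# a NetworkManager restart. "bond1" is the connection name from this report.
nmcli con mod bond1 ipv4.method manual ipv4.addresses 172.31.255.255/32
systemctl restart NetworkManager
nmcli -f GENERAL.STATE con show bond1   # the bond should now stay activated
```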
Which IP method did you set for the bond connection? Can you please
show the output of 'nmcli con show <con-name>'?
I tested this also with upstream git master of NM and the issue is
still reproducible there; probably we should improve how software
devices without IP configuration are handled upon restart.
Comment 10 - Rastislav Hepner, 2016-04-29 06:57:13 UTC
Comment 11 - Rastislav Hepner, 2016-04-29 07:00:29 UTC
Hi,
I've tried to simulate the customer's settings, so I set the following on the bond connection:
IPv4 disabled
IPv6 ignore
I've attached the contents of all connection profiles from my reproducer.
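For reference, with the initscripts (ifcfg) backend NetworkManager uses on RHEL 7, a bond profile with IPv4 disabled and IPv6 ignored would look roughly like the sketch below. This is illustrative only, not the reporter's actual attached profile.

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond1 (illustrative sketch)
DEVICE=bond1
TYPE=Bond
BONDING_MASTER=yes
ONBOOT=yes
BOOTPROTO=none    # IPv4 disabled: no address and no DHCP on the bond itself
IPV6INIT=no       # IPv6 ignored
```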
Comment 13 - Beniamino Galvani, 2016-07-21 12:18:16 UTC
The scenario is similar to the one of bug 1333983 (vlan over bond/team
that is lost on restart) and there is already a patch there, so I'm
marking this as a duplicate.
*** This bug has been marked as a duplicate of bug 1333983 ***