Bug 1439118 - NetworkManager wrongly manages veth devices [rhel-7.4-alpha only]
Summary: NetworkManager wrongly manages veth devices [rhel-7.4-alpha only]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: NetworkManager
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Thomas Haller
QA Contact: Desktop QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-04-05 09:32 UTC by Li Shuang
Modified: 2017-08-01 09:27 UTC
10 users

Fixed In Version: NetworkManager-1.8.0-0.4.rc2.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-01 09:27:08 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:2299 0 normal SHIPPED_LIVE Moderate: NetworkManager and libnl3 security, bug fix and enhancement update 2017-08-01 12:40:28 UTC

Description Li Shuang 2017-04-05 09:32:16 UTC
Description of problem:
# ip link add type veth
# ip link
...
19: veth0@veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000
    link/ether 7a:ca:f9:30:95:80 brd ff:ff:ff:ff:ff:ff
20: veth1@veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000
    link/ether 52:f3:9c:57:66:78 brd ff:ff:ff:ff:ff:ff

# ip link set veth0 name veth_client
RTNETLINK answers: Device or resource busy


Version-Release number of selected component (if applicable):
RHEL-7.4-20170330.1 ==> kernel-3.10.0-632.el7
# rpm -q iproute
iproute-3.10.0-79.el7.x86_64


How reproducible:
always


Steps to Reproduce:
1. # ip link add type veth
2. # ip link set veth0 name veth_client

Actual results:
RTNETLINK answers: Device or resource busy

Expected results:
veth0 is renamed to veth_client

Additional info:

Comment 3 Phil Sutter 2017-04-05 14:24:05 UTC
Hi Shuang,

(In reply to Li Shuang from comment #0)
> 19: veth0@veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
> state UP mode DEFAULT qlen 1000
>     link/ether 7a:ca:f9:30:95:80 brd ff:ff:ff:ff:ff:ff
> 20: veth1@veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
> state UP mode DEFAULT qlen 1000
>     link/ether 52:f3:9c:57:66:78 brd ff:ff:ff:ff:ff:ff
> 
> # ip link set veth0 name veth_client
> RTNETLINK answers: Device or resource busy

You can only rename links which are down. Could you please try this:

# ip link set veth0 down; ip link set veth0 name veth_client

Thanks, Phil

Comment 4 Li Shuang 2017-04-07 03:01:08 UTC
(In reply to Phil Sutter from comment #3)
Hi Phil,

Sorry for the late reply, and thanks for your help.
I noticed that the default state of new veth interfaces is up rather than down.
Do you think this change should be called out in the docs?

Shuang

Comment 5 Phil Sutter 2017-04-07 09:03:04 UTC
Hi Shuang,

(In reply to Li Shuang from comment #4)
> (In reply to Phil Sutter from comment #3)
> Hi Phil,
> 
> Sorry for the late reply, and thanks for your help.
> I noticed that the default state of new veth interfaces is up rather than
> down. Do you think this change should be called out in the docs?

Yes, this should be documented.

I just checked the behaviour on a fresh RHEL7.4 machine from Beaker: it's NetworkManager which sets the interface up; if I disable it, the interfaces stay down. Also, on my RHEL7.3-based VM, I don't see the respective log entries, and even though NetworkManager is still active there, the interfaces are not brought up.

I wasn't able to quickly point out a NetworkManager BZ which might have caused the change in behaviour, so I'm just reassigning this ticket to it.

Cheers, Phil

Comment 6 Thomas Haller 2017-04-19 08:36:28 UTC
For veth type devices that are created outside of NetworkManager, NetworkManager should treat them as unmanaged.

It does so due to /usr/lib/udev/rules.d/85-nm-unmanaged.rules which has:
ENV{ID_NET_DRIVER}=="veth", ENV{NM_UNMANAGED}="1"


Note that inside a container no udev is available, so veth devices are not marked as unmanaged -- that is intended, because in a container we do want to manage veth devices.
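As a side note, the stock rule can be overridden for a specific interface by a local rules file that sorts after 85-nm-unmanaged.rules; the file name and the INTERFACE match below are illustrative assumptions, not shipped defaults:

```
# /etc/udev/rules.d/86-nm-veth-managed.rules (hypothetical local override)
# Sorts after 85-nm-unmanaged.rules, so it can reset NM_UNMANAGED for one
# specific interface; matching on the name "veth0" is purely illustrative.
ENV{ID_NET_DRIVER}=="veth", ENV{INTERFACE}=="veth0", ENV{NM_UNMANAGED}="0"
```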


Anyway, it's unclear why the device would be managed or set up. I tested on a "fresh RHEL7.4 machine from Beaker", and NetworkManager did not manage veth and the new devices stayed down.



Please enable debug logging of NM [1], reproduce the issue and attach logfiles. Thanks.


[1] https://cgit.freedesktop.org/NetworkManager/NetworkManager/plain/contrib/fedora/rpm/NetworkManager.conf?id=d105a610d64e81d6a6f332ccea8ea7fd1fcdfcce
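The linked example config boils down to the [logging] section of /etc/NetworkManager/NetworkManager.conf; a minimal sketch (see NetworkManager.conf(5) for the accepted levels and domains):

```ini
# /etc/NetworkManager/NetworkManager.conf
[logging]
# TRACE is the most verbose level; domains=ALL enables all logging domains
level=TRACE
domains=ALL
```

After editing, restart the service (systemctl restart NetworkManager) and collect the output with journalctl -u NetworkManager.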

Comment 7 Li Shuang 2017-04-19 15:30:10 UTC
(In reply to Thomas Haller from comment #6)
Hi Thomas,

I ran the same test on RHEL-7.4-20170418.n.2 ==> kernel-3.10.0-654.el7, and the issue no longer occurs. (I originally found the issue on RHEL-7.4-20170330.1 ==> kernel-3.10.0-632.el7.)

On RHEL-7.4-20170330.1 ==> kernel-3.10.0-632.el7:
# rpm -q iproute
iproute-3.10.0-79.el7.x86_64
# rpm -q NetworkManager
NetworkManager-1.8.0-0.4.rc1.el7.x86_64

When NetworkManager is started, the new veth interfaces are up:
# ip link add type veth
# ip link
...
6: veth0@veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000
    link/ether 42:57:52:2b:88:04 brd ff:ff:ff:ff:ff:ff
7: veth1@veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000
    link/ether aa:2c:65:c9:93:8b brd ff:ff:ff:ff:ff:ff
But when NetworkManager is stopped, the new veth interfaces are down:
# ip link add type veth
# ip link
...
8: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 16:a5:54:44:d2:85 brd ff:ff:ff:ff:ff:ff
9: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether f2:fd:d8:50:70:0f brd ff:ff:ff:ff:ff:ff

On RHEL-7.4-20170418.n.2 ==> kernel-3.10.0-654.el7:
# rpm -q iproute
iproute-3.10.0-81.el7.x86_64
# rpm -q NetworkManager
NetworkManager-1.8.0-0.4.rc2.el7.x86_64
The new veth interfaces are down even when NetworkManager is running:
# ip link add type veth
# ip link
...
5: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether aa:36:66:dc:0a:fb brd ff:ff:ff:ff:ff:ff
6: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether da:a0:aa:a7:70:4b brd ff:ff:ff:ff:ff:ff

It seems that the new version of NetworkManager has fixed the issue.

Thanks, Shuang

Comment 8 Thomas Haller 2017-04-24 16:56:07 UTC
This was caused by a bug in NetworkManager.

Fixed upstream by commit https://cgit.freedesktop.org/NetworkManager/NetworkManager/commit/?id=d77449314a8251aaffed88755e1672c50568bdce


This only affected rhel-7.4-alpha.

The affected version was 1:1.8.0-0.4.rc1.

Comment 10 errata-xmlrpc 2017-08-01 09:27:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:2299

