Bug 1755542 - [nmstate] Cannot create a bridge with dynamic ipv4 whose port is a vlaned bond
Summary: [nmstate] Cannot create a bridge with dynamic ipv4 whose port is a vlaned bond
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: nmstate
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 8.2
Assignee: Gris Ge
QA Contact: Mingyu Shi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-09-25 16:24 UTC by Miguel Duarte Barroso
Modified: 2022-10-12 07:23 UTC
CC List: 5 users

Fixed In Version: nmstate-0.2.3-1.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-04-28 16:00:06 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
output of journalctl -u NetworkManager (301.23 KB, text/plain)
2019-09-25 16:33 UTC, Miguel Duarte Barroso
no flags Details
br0_over_vlan_of_bond.yml (764 bytes, text/plain)
2019-09-27 09:59 UTC, Gris Ge
no flags Details
dhcp_on_vlan.sh (728 bytes, application/x-shellscript)
2019-09-27 09:59 UTC, Gris Ge
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-37324 0 None None None 2022-10-12 07:23:06 UTC
Red Hat Product Errata RHBA-2020:1696 0 None None None 2020-04-28 16:00:28 UTC

Description Miguel Duarte Barroso 2019-09-25 16:24:02 UTC
Description of problem:
When attempting to create a bridge with dynamic IPv4 whose port is a VLAN over a bonded interface, the NetworkManager main loop hangs and eventually times out.

Version-Release number of selected component (if applicable):
nmstate-0.0.8-15.el8


How reproducible:
always


Steps to Reproduce:
1. Create a veth pair: ip link add veth1 type veth peer name veth2
2. Set both veth interfaces up: ip link set veth1 up && ip link set veth2 up
3. Apply the following nmstate state file via nmstatectl set <file_name>:
---
interfaces:
  - name: bond0
    type: bond
    state: up
    link-aggregation:
      slaves:
        - veth1
      options:
        miimon: "100"
        xmit_hash_policy: "2"
      mode: 802.3ad
    ipv4:
      enabled: false
    ipv6:
      enabled: false
  - name: bond0.1
    type: vlan
    state: up
    vlan:
      id: 1
      base-iface: bond0
    ipv4:
      enabled: false
    ipv6:
      enabled: false
  - name: bridge0
    type: linux-bridge
    state: up
    bridge:
      port:
        - name: bond0.1
      options:
        stp:
          enabled: false
    ipv4:
      enabled: true
      dhcp: true
      auto-dns: false
      auto-gateway: false
      auto-routes: false
    ipv6:
      enabled: false
  - name: veth1
    state: up
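For reference, the three reproduction steps above can be collected into a single Python driver (a sketch, assuming libnmstate is installed and root privileges are held; `libnmstate.apply` is the library call that `nmstatectl set` uses internally, as seen in the traceback below):

```python
# Sketch of the reproduction steps, driven from Python instead of the
# nmstatectl CLI. The state dict mirrors the YAML state file above.
import subprocess

# Steps 1 and 2: create the veth pair and bring both ends up
# (requires root).
SETUP_CMDS = [
    "ip link add veth1 type veth peer name veth2",
    "ip link set veth1 up",
    "ip link set veth2 up",
]

# Step 3: the desired state from the YAML file, as a Python dict.
DESIRED_STATE = {
    "interfaces": [
        {
            "name": "bond0",
            "type": "bond",
            "state": "up",
            "link-aggregation": {
                "mode": "802.3ad",
                "slaves": ["veth1"],
                "options": {"miimon": "100", "xmit_hash_policy": "2"},
            },
            "ipv4": {"enabled": False},
            "ipv6": {"enabled": False},
        },
        {
            "name": "bond0.1",
            "type": "vlan",
            "state": "up",
            "vlan": {"id": 1, "base-iface": "bond0"},
            "ipv4": {"enabled": False},
            "ipv6": {"enabled": False},
        },
        {
            "name": "bridge0",
            "type": "linux-bridge",
            "state": "up",
            "bridge": {
                "port": [{"name": "bond0.1"}],
                "options": {"stp": {"enabled": False}},
            },
            "ipv4": {
                "enabled": True,
                "dhcp": True,
                "auto-dns": False,
                "auto-gateway": False,
                "auto-routes": False,
            },
            "ipv6": {"enabled": False},
        },
        {"name": "veth1", "state": "up"},
    ]
}


def reproduce():
    """Run the setup commands, then apply the desired state."""
    import libnmstate  # assumed available on the test host

    for cmd in SETUP_CMDS:
        subprocess.run(cmd.split(), check=True)
    libnmstate.apply(DESIRED_STATE)
```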



Actual results:
2019-09-25 16:01:41,355 root         DEBUG    Checkpoint /org/freedesktop/NetworkManager/Checkpoint/3 created for all devices: 60
2019-09-25 16:01:41,355 root         DEBUG    Adding new interfaces: ['bond0.1', 'bond0', 'bridge0']
2019-09-25 16:01:41,357 root         DEBUG    Connection settings for ConnectionSetting.create:
id: bond0.1
iface: bond0.1
uuid: ee8e171e-8c7f-4770-8c05-d068d3a99171
type: vlan
autoconnect: True
autoconnect_slaves: <enum NM_SETTING_CONNECTION_AUTOCONNECT_SLAVES_YES of type NM.SettingConnectionAutoconnectSlaves>
2019-09-25 16:01:41,358 root         DEBUG    Connection settings for ConnectionSetting.create:
id: bond0
iface: bond0
uuid: 263c2eb8-4c38-4dd2-90e9-bde20f75f558
type: bond
autoconnect: True
autoconnect_slaves: <enum NM_SETTING_CONNECTION_AUTOCONNECT_SLAVES_YES of type NM.SettingConnectionAutoconnectSlaves>
2019-09-25 16:01:41,358 root         DEBUG    Connection settings for ConnectionSetting.create:
id: bridge0
iface: bridge0
uuid: 18fb926c-fcec-4878-9096-7796fbecd695
type: bridge
autoconnect: True
autoconnect_slaves: <enum NM_SETTING_CONNECTION_AUTOCONNECT_SLAVES_YES of type NM.SettingConnectionAutoconnectSlaves>
2019-09-25 16:01:41,359 root         DEBUG    Editing interfaces: ['eth1']
2019-09-25 16:01:41,359 root         DEBUG    Connection settings for ConnectionSetting.create:
id: eth1
iface: eth1
uuid: 35b4b890-c78b-4653-8f8b-466f1ee3d764
type: 802-3-ethernet
autoconnect: True
autoconnect_slaves: <enum NM_SETTING_CONNECTION_AUTOCONNECT_SLAVES_YES of type NM.SettingConnectionAutoconnectSlaves>
2019-09-25 16:01:41,362 root         DEBUG    Executing NM action: func=add_connection_async
2019-09-25 16:01:41,366 root         DEBUG    Connection adding succeeded: dev=bond0.1
2019-09-25 16:01:41,366 root         DEBUG    Executing NM action: func=add_connection_async
2019-09-25 16:01:41,370 root         DEBUG    Connection adding succeeded: dev=bond0
2019-09-25 16:01:41,370 root         DEBUG    Executing NM action: func=add_connection_async
2019-09-25 16:01:41,374 root         DEBUG    Connection adding succeeded: dev=bridge0
2019-09-25 16:01:41,374 root         DEBUG    Executing NM action: func=add_connection_async
2019-09-25 16:01:41,378 root         DEBUG    Connection adding succeeded: dev=eth1
2019-09-25 16:01:41,378 root         DEBUG    Executing NM action: func=safe_activate_async
2019-09-25 16:01:41,405 root         DEBUG    Connection activation initiated: dev=bridge0, con-state=<enum NM_ACTIVE_CONNECTION_STATE_ACTIVATING of type NM.ActiveConnectionState>
2019-09-25 16:02:16,395 root         WARNING  NM main-loop timed out.
2019-09-25 16:02:16,437 root         DEBUG    Checkpoint /org/freedesktop/NetworkManager/Checkpoint/3 rollback executed: dbus.Dictionary({dbus.String('/org/freedesktop/NetworkManager/Devices/2'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/4'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/1'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/3'): dbus.UInt32(0)}, signature=dbus.Signature('su'))
Traceback (most recent call last):
  File "/usr/bin/nmstatectl", line 11, in <module>
    load_entry_point('nmstate==0.0.8', 'console_scripts', 'nmstatectl')()
  File "/usr/lib/python3.7/site-packages/nmstatectl/nmstatectl.py", line 62, in main
    return args.func(args)
  File "/usr/lib/python3.7/site-packages/nmstatectl/nmstatectl.py", line 220, in apply
    statedata, args.verify, args.commit, args.timeout
  File "/usr/lib/python3.7/site-packages/nmstatectl/nmstatectl.py", line 240, in apply_state
    checkpoint = libnmstate.apply(state, verify_change, commit, timeout)
  File "/usr/lib/python3.7/site-packages/libnmstate/netapplier.py", line 66, in apply
    state.State(desired_state), verify_change, commit, rollback_timeout
  File "/usr/lib/python3.7/site-packages/libnmstate/netapplier.py", line 146, in _apply_ifaces_state
    con_profiles=ifaces_add_configs + ifaces_edit_configs,
  File "/usr/lib64/python3.7/contextlib.py", line 119, in __exit__
    next(self.gen)
  File "/usr/lib/python3.7/site-packages/libnmstate/netapplier.py", line 210, in _setup_providers
    mainloop.error
libnmstate.error.NmstateLibnmError: Unexpected failure of libnm when running the mainloop: run timeout


Expected results:
...
Desired state applied: 
---
interfaces:
- name: bond0
  type: bond
  state: up
  ipv4:
    enabled: false
  ipv6:
    enabled: false
  link-aggregation:
    mode: 802.3ad
    options:
      miimon: '100'
      xmit_hash_policy: '2'
    slaves:
    - veth1
- name: bond0.1
  type: vlan
  state: up
  ipv4:
    enabled: false
  ipv6:
    enabled: false
  vlan:
    base-iface: bond0
    id: 1
- name: bridge0
  type: linux-bridge
  state: up
  bridge:
    options:
      stp:
        enabled: false
    port:
    - name: bond0.1
  ipv4:
    auto-dns: false
    auto-gateway: false
    auto-routes: false
    dhcp: true
    enabled: true
  ipv6:
    enabled: false
- name: veth1
  state: up


Additional info:

Comment 1 Miguel Duarte Barroso 2019-09-25 16:33:43 UTC
Created attachment 1619162 [details]
output of journalctl -u NetworkManager

NetworkManager logs

Comment 2 Gris Ge 2019-09-27 09:57:50 UTC
Hi Miguel,

I tried this on RHEL 8.1 and it works well.

I will attach the script I used to set up a DHCP server on the VLAN.

What I did:

```bash
sudo ./dhcp_on_vlan.sh
sudo nmstatectl set br0_over_vlan_of_bond.yml
```


From the log you provided, it seems you are using Fedora. Can you try it on RHEL 8.1?

Comment 3 Gris Ge 2019-09-27 09:59:03 UTC
Created attachment 1619985 [details]
br0_over_vlan_of_bond.yml

Comment 4 Gris Ge 2019-09-27 09:59:28 UTC
Created attachment 1619986 [details]
dhcp_on_vlan.sh

Comment 5 Miguel Duarte Barroso 2019-09-30 07:52:53 UTC
(In reply to Gris Ge from comment #2)

> From the log you provided, it seems you are using Fedora. Can you try it on RHEL 8.1?

I can confirm the issue *does not* reproduce on RHEL 8.1.

Not sure what actually happened, but our QE faced this issue on a RHEL 8.1 host a while back.

I'd like to get his opinion before closing the bug. Unfortunately he's currently on PTO.

Will update ASAP.

Comment 6 Michael Burman 2019-10-07 10:24:49 UTC
(In reply to Miguel Duarte Barroso from comment #5)
> I can confirm the issue *does not* reproduce on RHEL 8.1.

QE reproduced it easily on RHEL 8.1.
Please keep the bug open.

Comment 7 Gris Ge 2019-10-08 07:44:52 UTC
(In reply to Michael Burman from comment #6)
> QE reproduced it easily on RHEL 8.1.
> Please keep the bug open.

Can you give me the contact information of the QE engineer?

I need to reproduce this issue locally for investigation.

Thanks.

Comment 8 Michael Burman 2019-10-08 08:25:55 UTC
(In reply to Gris Ge from comment #7)
> Can you give me the contact information of the QE engineer?
> I need to reproduce this issue locally for investigation.

You can contact me.
Please note that the steps to reproduce the issue are: create a bond, attach a VLAN-tagged bridge, and enable DHCPv4 on it, all in one shot. This is the scenario that fails.
The other scenario, adding the VLAN network on an existing bond, works. It only explodes if you try to do it all at once.
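The failing one-shot scenario versus the working incremental one can be sketched as two different ways of splitting the desired state (a sketch with abbreviated interface definitions; the `ONE_SHOT`/`STEP_1`/`STEP_2` names and the exact split are illustrative, not a procedure from this report):

```python
# Abbreviated interface definitions; the full desired state is in the
# bug description above.
BOND = {
    "name": "bond0", "type": "bond", "state": "up",
    "link-aggregation": {"mode": "802.3ad", "slaves": ["veth1"]},
}
VLAN = {
    "name": "bond0.1", "type": "vlan", "state": "up",
    "vlan": {"id": 1, "base-iface": "bond0"},
}
BRIDGE = {
    "name": "bridge0", "type": "linux-bridge", "state": "up",
    "bridge": {"port": [{"name": "bond0.1"}]},
    "ipv4": {"enabled": True, "dhcp": True},
}

# Failing scenario: bond + VLAN + DHCP bridge applied in one shot.
ONE_SHOT = {"interfaces": [BOND, VLAN, BRIDGE]}

# Working scenario: create the bond first, then layer the VLAN and the
# bridge on the already-existing bond in a second apply.
STEP_1 = {"interfaces": [BOND]}
STEP_2 = {"interfaces": [VLAN, BRIDGE]}
```

Each state dict would be passed to a separate `libnmstate.apply()` (or written to a YAML file for `nmstatectl set`); only the one-shot payload triggers the mainloop timeout.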

Comment 9 Gris Ge 2019-10-10 13:46:09 UTC
(In reply to Michael Burman from comment #8)
> The other scenario, adding the VLAN network on an existing bond, works. It
> only explodes if you try to do it all at once.

I will try, and hopefully I can reproduce it during your vacation.

Comment 10 Gris Ge 2020-02-12 05:31:51 UTC
Hi Michael,

We have updated nmstate with fixes to the activation process.
Can you try again with nmstate-0.2.3-1.el8 in the latest build of RHEL 8.2?

Thank you.

Comment 11 Michael Burman 2020-02-12 09:22:15 UTC
(In reply to Gris Ge from comment #10)
> Can you try again with nmstate-0.2.3-1.el8 in the latest build of RHEL 8.2?

Hi Gris,
The issue does not reproduce with nmstate-0.2.3-1.el8.noarch.
Thanks.

Comment 16 errata-xmlrpc 2020-04-28 16:00:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1696

