Bug 1812786
| Summary: | Fail to create bond with 'active_slave' and 'updelay' both been set | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Mingyu Shi <mshi> |
| Component: | nmstate | Assignee: | Gris Ge <fge> |
| Status: | CLOSED ERRATA | QA Contact: | Mingyu Shi <mshi> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 8.2 | CC: | ferferna, fge, jiji, jishi, network-qe, till |
| Target Milestone: | rc | Keywords: | Triaged |
| Target Release: | 8.4 | Flags: | pm-rhel: mirror+ |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | nmstate-1.0.0-0.1.el8 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-05-18 15:16:54 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1856640 | | |
| Bug Blocks: | | | |
| Attachments: | | | |
Hi Gris, it does not seem to work; can you check it?
Tested with versions:
nmstate-0.3.3-2.el8.noarch
NetworkManager-1.26.0-0.2.1.el8.x86_64
```yaml
# cat active1.yaml
interfaces:
- name: bond-mode5
  type: bond
  state: up
  ipv4:
    address:
    - ip: 192.168.4.2
      prefix-length: 24
    dhcp: false
    enabled: true
  ipv6:
    enabled: true
  link-aggregation:
    mode: balance-tlb
    slaves:
    - veth1
    - veth0
    options:
      active_slave: veth1
      miimon: 500
      updelay: 1000
```
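For reference, the kernel bonding driver only honors `updelay` when `miimon` is non-zero, and rounds `updelay` down to a multiple of `miimon`. A minimal sanity check for an options dict like the one above could look like this (a sketch for illustration, not nmstate's own validation; `check_bond_options` is a hypothetical helper):

```python
# Illustrative sanity check for the bond options used above. This is a
# sketch, not nmstate's own validation; check_bond_options is hypothetical.

def check_bond_options(options):
    """Return a list of warnings for a bond 'options' dict."""
    warnings = []
    miimon = options.get("miimon", 0)
    updelay = options.get("updelay", 0)
    if updelay and not miimon:
        # The kernel bonding driver ignores updelay unless miimon is non-zero.
        warnings.append("updelay has no effect without miimon")
    elif miimon and updelay % miimon:
        # The kernel rounds updelay down to a multiple of miimon.
        rounded = updelay - updelay % miimon
        warnings.append(
            "updelay %d is not a multiple of miimon %d; the kernel will "
            "round it down to %d" % (updelay, miimon, rounded)
        )
    return warnings

print(check_bond_options({"active_slave": "veth1", "miimon": 500, "updelay": 1000}))
# -> []  (1000 is a multiple of 500, so the combination is fine)
print(check_bond_options({"active_slave": "veth1", "updelay": 1000}))
# -> ['updelay has no effect without miimon']
```

With miimon=500 and updelay=1000 the combination passes this check, which is consistent with the report below that the failure depends on the verification step rather than on an invalid option combination.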
[16:22:32@hpe-dl380pgen8-02-vm-11 ~]0# nmstatectl set active1.yaml
2020-07-10 16:22:39,655 root DEBUG Async action: Create checkpoint started
2020-07-10 16:22:39,660 root DEBUG Checkpoint None created for all devices
2020-07-10 16:22:39,660 root DEBUG Async action: Create checkpoint finished
2020-07-10 16:22:39,662 root DEBUG Async action: Update profile: veth0 started
2020-07-10 16:22:39,663 root DEBUG Async action: Update profile: veth1 started
2020-07-10 16:22:39,664 root DEBUG Async action: Add profile: bond-mode5 started
2020-07-10 16:22:39,666 root DEBUG Async action: Update profile: veth0 finished
2020-07-10 16:22:39,668 root DEBUG Async action: Update profile: veth1 finished
2020-07-10 16:22:39,672 root DEBUG Async action: Add profile: bond-mode5 finished
2020-07-10 16:22:39,672 root DEBUG Async action: Activate profile: bond-mode5 started
2020-07-10 16:22:39,694 root DEBUG Connection activation initiated: dev=bond-mode5, con-state=<enum NM_ACTIVE_CONNECTION_STATE_ACTIVATING of type NM.ActiveConnectionState>
2020-07-10 16:22:39,723 root DEBUG Connection activation succeeded: dev=bond-mode5, con-state=<enum NM_ACTIVE_CONNECTION_STATE_ACTIVATING of type NM.ActiveConnectionState>, dev-state=<enum NM_DEVICE_STATE_IP_CONFIG of type NM.DeviceState>, state-flags=<flags NM_ACTIVATION_STATE_FLAG_IS_MASTER | NM_ACTIVATION_STATE_FLAG_LAYER2_READY of type NM.ActivationStateFlags>
2020-07-10 16:22:39,723 root DEBUG Async action: Activate profile: bond-mode5 finished
2020-07-10 16:22:39,755 root DEBUG Async action: Activate profile: veth0 started
2020-07-10 16:22:39,782 root DEBUG Async action: Activate profile: veth1 started
2020-07-10 16:22:39,785 root DEBUG Connection activation initiated: dev=veth0, con-state=<enum NM_ACTIVE_CONNECTION_STATE_ACTIVATING of type NM.ActiveConnectionState>
2020-07-10 16:22:39,815 root DEBUG Connection activation initiated: dev=veth1, con-state=<enum NM_ACTIVE_CONNECTION_STATE_ACTIVATING of type NM.ActiveConnectionState>
2020-07-10 16:22:39,845 root DEBUG Connection activation succeeded: dev=veth0, con-state=<enum NM_ACTIVE_CONNECTION_STATE_ACTIVATED of type NM.ActiveConnectionState>, dev-state=<enum NM_DEVICE_STATE_ACTIVATED of type NM.DeviceState>, state-flags=<flags NM_ACTIVATION_STATE_FLAG_IS_SLAVE | NM_ACTIVATION_STATE_FLAG_LAYER2_READY | NM_ACTIVATION_STATE_FLAG_IP4_READY | NM_ACTIVATION_STATE_FLAG_IP6_READY of type NM.ActivationStateFlags>
2020-07-10 16:22:39,845 root DEBUG Async action: Activate profile: veth0 finished
2020-07-10 16:22:39,848 root DEBUG Connection activation succeeded: dev=veth1, con-state=<enum NM_ACTIVE_CONNECTION_STATE_ACTIVATED of type NM.ActiveConnectionState>, dev-state=<enum NM_DEVICE_STATE_ACTIVATED of type NM.DeviceState>, state-flags=<flags NM_ACTIVATION_STATE_FLAG_IS_SLAVE | NM_ACTIVATION_STATE_FLAG_LAYER2_READY | NM_ACTIVATION_STATE_FLAG_IP4_READY | NM_ACTIVATION_STATE_FLAG_IP6_READY of type NM.ActivationStateFlags>
2020-07-10 16:22:39,849 root DEBUG Async action: Activate profile: veth1 finished
2020-07-10 16:22:45,108 root DEBUG Async action: Rollback to checkpoint /org/freedesktop/NetworkManager/Checkpoint/86 started
2020-07-10 16:22:45,144 root DEBUG Checkpoint /org/freedesktop/NetworkManager/Checkpoint/86 rollback executed
2020-07-10 16:22:45,144 root DEBUG Interface veth0_ep rollback succeeded
2020-07-10 16:22:45,144 root DEBUG Interface veth1_ep rollback succeeded
2020-07-10 16:22:45,145 root DEBUG Interface lo rollback succeeded
2020-07-10 16:22:45,145 root DEBUG Interface ens3 rollback succeeded
2020-07-10 16:22:45,145 root DEBUG Async action: Waiting for rolling back veth1 started
2020-07-10 16:22:45,145 root DEBUG Interface veth1 rollback succeeded
2020-07-10 16:22:45,145 root DEBUG Async action: Waiting for rolling back veth0 started
2020-07-10 16:22:45,145 root DEBUG Interface veth0 rollback succeeded
2020-07-10 16:22:45,145 root DEBUG Async action: Rollback to checkpoint /org/freedesktop/NetworkManager/Checkpoint/86 finished
2020-07-10 16:22:45,150 root DEBUG Active connection of device None has been replaced
2020-07-10 16:22:45,179 root DEBUG Active connection of device None has been replaced
2020-07-10 16:22:45,196 root DEBUG Connection activation succeeded: dev=veth1, con-state=<enum NM_ACTIVE_CONNECTION_STATE_ACTIVATED of type NM.ActiveConnectionState>, dev-state=<enum NM_DEVICE_STATE_ACTIVATED of type NM.DeviceState>, state-flags=<flags NM_ACTIVATION_STATE_FLAG_LAYER2_READY | NM_ACTIVATION_STATE_FLAG_IP4_READY | NM_ACTIVATION_STATE_FLAG_IP6_READY of type NM.ActivationStateFlags>
2020-07-10 16:22:45,196 root DEBUG Async action: Waiting for rolling back veth1 finished
2020-07-10 16:22:45,204 root DEBUG Connection activation succeeded: dev=veth0, con-state=<enum NM_ACTIVE_CONNECTION_STATE_ACTIVATED of type NM.ActiveConnectionState>, dev-state=<enum NM_DEVICE_STATE_ACTIVATED of type NM.DeviceState>, state-flags=<flags NM_ACTIVATION_STATE_FLAG_LAYER2_READY | NM_ACTIVATION_STATE_FLAG_IP4_READY | NM_ACTIVATION_STATE_FLAG_IP6_READY of type NM.ActivationStateFlags>
2020-07-10 16:22:45,204 root DEBUG Async action: Waiting for rolling back veth0 finished
Traceback (most recent call last):
  File "/usr/bin/nmstatectl", line 11, in <module>
    load_entry_point('nmstate==0.3.3', 'console_scripts', 'nmstatectl')()
  File "/usr/lib/python3.6/site-packages/nmstatectl/nmstatectl.py", line 67, in main
    return args.func(args)
  File "/usr/lib/python3.6/site-packages/nmstatectl/nmstatectl.py", line 256, in apply
    args.save_to_disk,
  File "/usr/lib/python3.6/site-packages/nmstatectl/nmstatectl.py", line 289, in apply_state
    save_to_disk=save_to_disk,
  File "/usr/lib/python3.6/site-packages/libnmstate/netapplier.py", line 71, in apply
    _apply_ifaces_state(plugins, net_state, verify_change, save_to_disk)
  File "/usr/lib/python3.6/site-packages/libnmstate/netapplier.py", line 115, in _apply_ifaces_state
    _verify_change(plugins, net_state)
  File "/usr/lib/python3.6/site-packages/libnmstate/netapplier.py", line 120, in _verify_change
    net_state.verify(current_state)
  File "/usr/lib/python3.6/site-packages/libnmstate/net_state.py", line 63, in verify
    self._ifaces.verify(current_state.get(Interface.KEY))
  File "/usr/lib/python3.6/site-packages/libnmstate/ifaces/ifaces.py", line 308, in verify
    cur_iface.state_for_verify(),
libnmstate.error.NmstateVerificationError:
desired
=======
---
name: bond-mode5
type: bond
state: up
ipv4:
  enabled: true
  address:
  - ip: 192.168.4.2
    prefix-length: 24
  dhcp: false
ipv6:
  enabled: true
link-aggregation:
  mode: balance-tlb
  options:
    active_slave: veth1
    miimon: 500
    updelay: 1000
  slaves:
  - veth0
  - veth1
current
=======
---
name: bond-mode5
type: bond
state: up
ipv4:
  enabled: true
  address:
  - ip: 192.168.4.2
    prefix-length: 24
  dhcp: false
ipv6:
  enabled: true
  address: []
  autoconf: false
  dhcp: false
link-aggregation:
  mode: balance-tlb
  options:
    active_slave: veth0
    arp_interval: 0
    arp_ip_target: ''
    miimon: 500
    updelay: 1000
  slaves:
  - veth0
  - veth1
lldp:
  enabled: false
mac-address: 4A:36:F7:1D:AC:81
mtu: 1500
difference
==========
--- desired
+++ current
@@ -10,12 +10,21 @@
   dhcp: false
 ipv6:
   enabled: true
+  address: []
+  autoconf: false
+  dhcp: false
 link-aggregation:
   mode: balance-tlb
   options:
-    active_slave: veth1
+    active_slave: veth0
+    arp_interval: 0
+    arp_ip_target: ''
     miimon: 500
     updelay: 1000
   slaves:
   - veth0
   - veth1
+lldp:
+  enabled: false
+mac-address: 4A:36:F7:1D:AC:81
+mtu: 1500
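The NmstateVerificationError above is raised because nmstate re-reads the state after activation and compares it with the desired state: `active_slave` came back as veth0 instead of veth1. In spirit, that comparison is a recursive subset check where the current state may carry extra keys, but any key present in the desired state must match. A minimal sketch of that idea (illustrative only, not libnmstate's actual verify code):

```python
# Minimal sketch of the desired-vs-current comparison behind an
# NmstateVerificationError; illustrative only, not libnmstate's code.

def verify(desired, current):
    """Every key in 'desired' must match 'current'; extra current keys are fine."""
    for key, want in desired.items():
        got = current.get(key)
        if isinstance(want, dict) and isinstance(got, dict):
            verify(want, got)
        elif got != want:
            raise ValueError(
                "verification failed at %r: desired %r, current %r"
                % (key, want, got)
            )

desired = {"link-aggregation": {"options": {"active_slave": "veth1",
                                            "miimon": 500, "updelay": 1000}}}
current = {"link-aggregation": {"options": {"active_slave": "veth0",
                                            "arp_interval": 0,
                                            "miimon": 500, "updelay": 1000}}}
try:
    verify(desired, current)
except ValueError as exc:
    # verification failed at 'active_slave': desired 'veth1', current 'veth0'
    print(exc)
```

The extra keys reported in the diff (arp_interval, lldp, mtu, ...) are defaults that nmstate tolerates; the rollback happens only because active_slave itself differs.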
Confirmed as a NetworkManager bug: https://bugzilla.redhat.com/show_bug.cgi?id=1856640

Revoked the devel_ack, as we have to wait for NetworkManager to fix their problem first.

Created attachment 1739242 [details]: pre-tested.log
Verified with versions:
nmstate-1.0.0-1.el8.noarch
nispor-1.0.1-2.el8.x86_64
NetworkManager-1.30.0-0.3.el8.x86_64
DISTRO=RHEL-8.4.0-20201203.n.0
Linux hp-dl380pg8-11.rhts.eng.pek2.redhat.com 4.18.0-257.el8.x86_64 #1 SMP Wed Dec 2 02:01:12 EST 2020 x86_64 x86_64 x86_64 GNU/Linux
Verified with versions:
nmstate-1.0.0-1.el8.noarch
nispor-1.0.1-2.el8.x86_64
NetworkManager-1.30.0-0.4.el8.x86_64
DISTRO=RHEL-8.4.0-20201216.d.1
Linux dell-per740-01.rhts.eng.pek2.redhat.com 4.18.0-262.el8.dt3.x86_64 #1 SMP Tue Dec 15 04:28:42 EST 2020 x86_64 x86_64 x86_64 GNU/Linux

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (nmstate bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1748
Created attachment 1669569 [details]: reproducer.log

Description of problem:
If updelay is set, creating a bond with active_slave set fails. It works if updelay is removed, or if active_slave is not specified.

Version-Release number of selected component (if applicable):
nmstate-0.2.6-3.8.el8.noarch
NetworkManager-1.22.8-4.el8.x86_64

How reproducible:
Very high

Steps to Reproduce:
1. Create a bond with options: active_slave and updelay (which requires miimon to be set, too)

Actual results:
Fail

Expected results:
No failure

Additional info:
Different miimon and updelay values (times) MAY cause different results.