Bug 1854851 - Error - Connection adding failed, onnection.interface-name: 'enp175s0f0v0.811': interface name is longer than 15 characters
Summary: Error - Connection adding failed, onnection.interface-name: 'enp175s0f0v0.811...
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: vdsm
Classification: oVirt
Component: SuperVDSM
Version: 4.40.22
Hardware: x86_64
OS: Linux
medium
medium
Target Milestone: ovirt-4.4.4
Assignee: Ales Musil
QA Contact: Michael Burman
URL:
Whiteboard:
Depends On: 1856256
Blocks:
 
Reported: 2020-07-08 11:17 UTC by Pavel Zinchuk
Modified: 2020-12-30 08:20 UTC (History)
5 users (show)

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2020-12-09 08:19:37 UTC
oVirt Team: Network
Embargoed:
pm-rhel: ovirt-4.4+


Attachments (Terms of Use)
Error, when I try to assign Logical Network to the virtual interface (68.28 KB, image/png)
2020-07-08 11:17 UTC, Pavel Zinchuk
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1540991 0 unspecified CLOSED vdsm should handle attachment of a vlan networks to the new long interfaces on rhel7.5 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1859284 0 medium CLOSED [Docs] Known Issue about VLANs on long interface names 2021-02-22 00:41:40 UTC

Internal Links: 1859284

Description Pavel Zinchuk 2020-07-08 11:17:31 UTC
Created attachment 1700285 [details]
Error, when I try to assign Logical Network to the virtual interface

Description of problem:
I have a PCI network card "Intel Corporation 82599ES 10-Gigabit".

Based on the "Predictable Interface Names" logic, Linux resolves the interface names for this card as enp175s0f0 and enp175s0f1.

This is expected, because the PCI addresses for this card are 0000:af:00.0 and 0000:af:00.1.
Name mapping example:
enp175s0f1                        pci 0000:af:00.1
|   |  | |                             |    |  | |
|   |  | |                 domain <- 0000   |  | |
|   |  | |                                  |  | |
en  |  | | --> ethernet                     |  | |
    |  | |                                  |  | |
  p175 | | --> prefix/bus number (175) <-- 175 | |
       | |                                     | |
       s0| --> slot/device number (0)  <--     0 |
         |                                       |
         f1 --> function number (1)    <--       1


I use the network interfaces' virtual functions (VFs) in my network infrastructure.
As a result, the virtual interfaces created from this network card's ports have the names enp175s0f0v0, enp175s0f0v1, enp175s0f1v0, and enp175s0f1v1.

When I try to add an oVirt logical network (from the oVirt Engine Webadmin -> Host -> Network Interfaces -> Setup Host Networks) to these virtual interfaces, the oVirt host returns a generic error: "Error while executing action HostSetupNetworks: Unexpected exception".
In the system logs at the same time I see the following error:
NM main-loop aborted: Connection adding failed: error=nm-connection-error-quark: connection.interface-name: 'enp175s0f0v0.811': interface name is longer than 15 characters (7)
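The arithmetic behind this failure is easy to verify: the Linux kernel limits interface names to 15 visible characters (IFNAMSIZ is 16 bytes including the trailing NUL), and the VLAN name nmstate builds here is one character over. A quick shell check:

```shell
# The VLAN interface name that NetworkManager rejects:
name='enp175s0f0v0.811'
echo "${#name}"   # 16 characters, one over the kernel's 15-character limit
```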

Version-Release number of selected component (if applicable):
Installed the latest Release Candidate, oVirt 4.4.1.8, on the oVirt hosts.

[root@prd-ovirt-host-01 kwadmin]# rpm -qa | grep vdsm
vdsm-http-4.40.22-1.el8.noarch
vdsm-api-4.40.22-1.el8.noarch
vdsm-4.40.22-1.el8.x86_64
vdsm-python-4.40.22-1.el8.noarch
vdsm-hook-fcoe-4.40.22-1.el8.noarch
vdsm-hook-openstacknet-4.40.22-1.el8.noarch
vdsm-common-4.40.22-1.el8.noarch
vdsm-network-4.40.22-1.el8.x86_64
vdsm-jsonrpc-4.40.22-1.el8.noarch
vdsm-hook-ethtool-options-4.40.22-1.el8.noarch
vdsm-yajsonrpc-4.40.22-1.el8.noarch
vdsm-client-4.40.22-1.el8.noarch
vdsm-hook-vhostmd-4.40.22-1.el8.noarch
vdsm-hook-vmfex-dev-4.40.22-1.el8.noarch

[root@prd-ovirt-host-01 kwadmin]# rpm -qa | grep ovirt
ovirt-openvswitch-ovn-2.11-0.2020060501.el8.noarch
ovirt-ansible-engine-setup-1.2.4-1.el8.noarch
ovirt-host-4.4.1-4.el8.x86_64
ovirt-imageio-client-2.0.9-1.el8.x86_64
ovirt-vmconsole-host-1.0.8-1.el8.noarch
python3-ovirt-engine-sdk4-4.4.4-1.el8.x86_64
ovirt-hosted-engine-setup-2.4.5-1.el8.noarch
ovirt-openvswitch-ovn-common-2.11-0.2020060501.el8.noarch
ovirt-openvswitch-ovn-host-2.11-0.2020060501.el8.noarch
cockpit-ovirt-dashboard-0.14.9-1.el8.noarch
ovirt-hosted-engine-ha-2.4.4-1.el8.noarch
python3-ovirt-setup-lib-1.3.2-1.el8.noarch
ovirt-python-openvswitch-2.11-0.2020060501.el8.noarch
ovirt-imageio-daemon-2.0.9-1.el8.x86_64
ovirt-vmconsole-1.0.8-1.el8.noarch
ovirt-host-dependencies-4.4.1-4.el8.x86_64
ovirt-ansible-hosted-engine-setup-1.1.5-1.el8.noarch
ovirt-openvswitch-2.11-0.2020060501.el8.noarch
ovirt-imageio-common-2.0.9-1.el8.x86_64
ovirt-provider-ovn-driver-1.2.30-1.el8.noarch
ovirt-release44-pre-4.4.1-0.8.rc6.el8.noarch
ovirt-release44-4.4.1-0.8.rc6.el8.noarch



How reproducible:

To reproduce this issue you need a network interface with a long name, like enp175s0f1.
At least one VF must also be enabled for this interface.

Steps to Reproduce:
1. Open oVirt Engine Webadmin -> Host -> Your host -> Network Interfaces -> Setup Host Networks
2. Click "Show virtual functions"
3. Assign any oVirt Logical Network to the virtual interface, like enp175s0f1v0

Actual results:

Error: NM main-loop aborted: Connection adding failed: error=nm-connection-error-quark: connection.interface-name: 'enp175s0f0v0.811': interface name is longer than 15 characters (7)


Expected results:

oVirt Logical network should be assigned to the virtual interface without error.


Additional info:

Providing errors from system logs.

/var/log/messages:
Jul  8 10:05:43 prd-ovirt-host-01 vdsm[7993]: WARN unhandled close event
Jul  8 10:05:44 prd-ovirt-host-01 systemd[1]: Stopping Link Layer Discovery Protocol Agent Daemon....
Jul  8 10:05:45 prd-ovirt-host-01 lldpad[23228]: Signal 15 received - terminating
Jul  8 10:05:45 prd-ovirt-host-01 systemd[1]: Stopped Link Layer Discovery Protocol Agent Daemon..
Jul  8 10:05:45 prd-ovirt-host-01 systemd[1]: Started Link Layer Discovery Protocol Agent Daemon..
Jul  8 10:05:45 prd-ovirt-host-01 systemd[1]: Stopping Open-FCoE Inititator....
Jul  8 10:05:45 prd-ovirt-host-01 systemd[1]: Stopped Open-FCoE Inititator..
Jul  8 10:05:45 prd-ovirt-host-01 systemd[1]: Starting Open-FCoE Inititator....
Jul  8 10:05:45 prd-ovirt-host-01 systemd[1]: Started Open-FCoE Inititator..
Jul  8 10:05:45 prd-ovirt-host-01 NetworkManager[1863]: <info>  [1594202745.5283] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/7" pid=4564 uid=0 result="success"
Jul  8 10:05:45 prd-ovirt-host-01 NetworkManager[1863]: <info>  [1594202745.5389] checkpoint[0x560fffc46dd0]: rollback of /org/freedesktop/NetworkManager/Checkpoint/7
Jul  8 10:05:45 prd-ovirt-host-01 NetworkManager[1863]: <info>  [1594202745.5420] audit: op="checkpoint-rollback" arg="/org/freedesktop/NetworkManager/Checkpoint/7" pid=4564 uid=0 result="success"
Jul  8 10:05:50 prd-ovirt-host-01 vdsm[7993]: ERROR Internal server error#012Traceback (most recent call last):#012  File "/usr/lib/python3.6/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request#012    res = method(**params)#012  File "/usr/lib/python3.6/site-packages/vdsm/rpc/Bridge.py", line 198, in _dynamicMethod#012    result = fn(*methodArgs)#012  File "<decorator-gen-480>", line 2, in setupNetworks#012  File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 50, in method#012    ret = func(*args, **kwargs)#012  File "/usr/lib/python3.6/site-packages/vdsm/API.py", line 1548, in setupNetworks#012    supervdsm.getProxy().setupNetworks(networks, bondings, options)#012  File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 56, in __call__#012    return callMethod()#012  File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>#012    **kwargs)#012  File "<string>", line 2, in setupNetworks#012  File "/usr/lib64/python3.6/multiprocessing/managers.py", line 772, in _callmethod#012    raise convert_to_error(kind, result)#012libnmstate.error.NmstateLibnmError: Unexpected failure of libnm when running the mainloop: run execution
Jul  8 10:05:53 prd-ovirt-host-01 vdsm[7993]: WARN unhandled write event


/var/log/vdsm/vdsm.log:
2020-07-08 10:06:44,237+0000 INFO  (jsonrpc/6) [api.network] FINISH setupNetworks error=Unexpected failure of libnm when running the mainloop: run execution from=::ffff:10.60.98.51,36130, flow_id=e0540fe8-3c63-4f09-8a30-bd6bc263167d (api:52)
2020-07-08 10:06:44,237+0000 ERROR (jsonrpc/6) [jsonrpc.JsonRpcServer] Internal server error (__init__:350)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request
    res = method(**params)
  File "/usr/lib/python3.6/site-packages/vdsm/rpc/Bridge.py", line 198, in _dynamicMethod
    result = fn(*methodArgs)
  File "<decorator-gen-480>", line 2, in setupNetworks
  File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/API.py", line 1548, in setupNetworks
    supervdsm.getProxy().setupNetworks(networks, bondings, options)
  File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 56, in __call__
    return callMethod()
  File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>
    **kwargs)
  File "<string>", line 2, in setupNetworks
  File "/usr/lib64/python3.6/multiprocessing/managers.py", line 772, in _callmethod
    raise convert_to_error(kind, result)
libnmstate.error.NmstateLibnmError: Unexpected failure of libnm when running the mainloop: run execution
2020-07-08 10:06:44,238+0000 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.setupNetworks failed (error -32603) in 6.00 seconds (__init__:312)
2020-07-08 10:06:44,335+0000 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.confirmConnectivity succeeded in 0.00 seconds (__init__:312)
2020-07-08 10:06:44,777+0000 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:312)


/var/log/vdsm/supervdsm.log:
MainProcess|jsonrpc/3::DEBUG::2020-07-08 10:08:57,144::supervdsm_server::93::SuperVdsm.ServerCallback::(wrapper) call setupNetworks with ({'onebc10d2908654': {'vlan': '811', 'ipv6autoconf': False, 'nic': 'enp175s0f0v0', 'bridged': 'false', 'dhcpv6': False, 'mtu': 9000, 'switch': 'legacy'}}, {}, {'connectivityTimeout': 120, 'commitOnSuccess': True, 'connectivityCheck': 'true'}) {}
MainProcess|jsonrpc/3::INFO::2020-07-08 10:08:57,144::api::220::root::(setupNetworks) Setting up network according to configuration: networks:{'onebc10d2908654': {'vlan': '811', 'ipv6autoconf': False, 'nic': 'enp175s0f0v0', 'bridged': 'false', 'dhcpv6': False, 'mtu': 9000, 'switch': 'legacy'}}, bondings:{}, options:{'connectivityTimeout': 120, 'commitOnSuccess': True, 'connectivityCheck': 'true'}
MainProcess|jsonrpc/3::DEBUG::2020-07-08 10:08:57,173::cmdutils::130::root::(exec_cmd) /sbin/tc qdisc show (cwd None)
MainProcess|jsonrpc/3::DEBUG::2020-07-08 10:08:57,182::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0
MainProcess|jsonrpc/3::DEBUG::2020-07-08 10:08:57,186::cmdutils::130::root::(exec_cmd) /sbin/tc class show dev eno3v0 classid 0:32b (cwd None)
MainProcess|jsonrpc/3::DEBUG::2020-07-08 10:08:57,192::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0
MainProcess|jsonrpc/3::DEBUG::2020-07-08 10:08:57,192::cmdutils::130::root::(exec_cmd) /sbin/tc class show dev eno4v0 classid 0:32b (cwd None)
MainProcess|jsonrpc/3::DEBUG::2020-07-08 10:08:57,196::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0
MainProcess|jsonrpc/3::INFO::2020-07-08 10:08:57,377::netconfpersistence::58::root::(setNetwork) Adding network ovirtmgmt({'bridged': True, 'stp': False, 'mtu': 1500, 'vlan': 805, 'bonding': 'bond0', 'defaultRoute': True, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'ipaddr': '10.10.8.101', 'netmask': '255.255.255.0', 'gateway': '10.10.8.254', 'switch': 'legacy', 'nameservers': ['127.0.0.1', '8.8.8.8', '8.8.4.4']})
MainProcess|jsonrpc/3::INFO::2020-07-08 10:08:57,377::netconfpersistence::58::root::(setNetwork) Adding network old_data({'bridged': True, 'stp': False, 'mtu': 1500, 'vlan': 80, 'bonding': 'bond0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/3::INFO::2020-07-08 10:08:57,377::netconfpersistence::58::root::(setNetwork) Adding network localdata({'bridged': False, 'mtu': 9000, 'vlan': 806, 'bonding': 'bond0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'ipaddr': '10.10.9.101', 'netmask': '255.255.255.0', 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/3::INFO::2020-07-08 10:08:57,377::netconfpersistence::58::root::(setNetwork) Adding network prod_mgmt({'bridged': True, 'stp': False, 'mtu': 1500, 'vlan': 801, 'bonding': 'bond0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'ipaddr': '10.10.64.101', 'netmask': '255.255.248.0', 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/3::INFO::2020-07-08 10:08:57,377::netconfpersistence::58::root::(setNetwork) Adding network prod_mon({'bridged': True, 'stp': False, 'mtu': 1500, 'vlan': 802, 'bonding': 'bond0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'ipaddr': '10.10.72.101', 'netmask': '255.255.248.0', 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/3::INFO::2020-07-08 10:08:57,377::netconfpersistence::58::root::(setNetwork) Adding network old_mon({'bridged': True, 'stp': False, 'mtu': 1500, 'vlan': 30, 'bonding': 'bond0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/3::INFO::2020-07-08 10:08:57,377::netconfpersistence::58::root::(setNetwork) Adding network old_mgmt({'bridged': True, 'stp': False, 'mtu': 1500, 'vlan': 20, 'bonding': 'bond0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/3::INFO::2020-07-08 10:08:57,377::netconfpersistence::58::root::(setNetwork) Adding network old_sip({'bridged': True, 'stp': False, 'mtu': 1500, 'vlan': 60, 'bonding': 'bond0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/3::INFO::2020-07-08 10:08:57,377::netconfpersistence::58::root::(setNetwork) Adding network prod_public({'bridged': True, 'stp': False, 'mtu': 1500, 'vlan': 10, 'bonding': 'bond0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/3::INFO::2020-07-08 10:08:57,377::netconfpersistence::58::root::(setNetwork) Adding network prod_sip({'bridged': True, 'stp': False, 'mtu': 1500, 'vlan': 804, 'bonding': 'bond0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/3::INFO::2020-07-08 10:08:57,377::netconfpersistence::58::root::(setNetwork) Adding network prod_data({'bridged': True, 'stp': False, 'mtu': 1500, 'vlan': 803, 'bonding': 'bond0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/3::INFO::2020-07-08 10:08:57,377::netconfpersistence::58::root::(setNetwork) Adding network vmwmgmt({'bridged': False, 'mtu': 1500, 'vlan': 40, 'bonding': 'bond0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/3::INFO::2020-07-08 10:08:57,377::netconfpersistence::58::root::(setNetwork) Adding network onc5d63a790fef4({'bridged': False, 'mtu': 9000, 'vlan': 810, 'bonding': 'bond1', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/3::INFO::2020-07-08 10:08:57,377::netconfpersistence::58::root::(setNetwork) Adding network ondb0a9571c03a4({'bridged': False, 'mtu': 9000, 'vlan': 811, 'nic': 'eno3v0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'ipaddr': '10.10.96.67', 'netmask': '255.255.255.192', 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/3::INFO::2020-07-08 10:08:57,377::netconfpersistence::58::root::(setNetwork) Adding network on285a0fef5c764({'bridged': False, 'mtu': 9000, 'vlan': 811, 'nic': 'eno4v0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'ipaddr': '10.10.96.68', 'netmask': '255.255.255.192', 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/3::INFO::2020-07-08 10:08:57,379::netconfpersistence::69::root::(setBonding) Adding bond1({'nics': ['eno3', 'eno4'], 'options': 'mode=6', 'switch': 'legacy'})
MainProcess|jsonrpc/3::INFO::2020-07-08 10:08:57,379::netconfpersistence::69::root::(setBonding) Adding bond0({'nics': ['eno1', 'eno2', 'enp59s0f0', 'enp59s0f1'], 'options': 'mode=4', 'switch': 'legacy', 'hwaddr': '0c:c4:7a:2a:70:8c'})
MainProcess|jsonrpc/3::INFO::2020-07-08 10:08:57,379::netconfpersistence::58::root::(setNetwork) Adding network onebc10d2908654({'vlan': 811, 'ipv6autoconf': False, 'nic': 'enp175s0f0v0', 'bridged': False, 'dhcpv6': False, 'mtu': 9000, 'switch': 'legacy', 'defaultRoute': False, 'bootproto': 'none', 'nameservers': []})
MainProcess|jsonrpc/3::DEBUG::2020-07-08 10:08:57,382::commands::153::common.commands::(start) /usr/bin/taskset --cpu-list 0-39 /usr/libexec/vdsm/hooks/before_network_setup/50_fcoe (cwd None)
MainProcess|jsonrpc/3::INFO::2020-07-08 10:08:57,929::hooks::122::root::(_runHooksDir) /usr/libexec/vdsm/hooks/before_network_setup/50_fcoe: rc=0 err=b''
MainProcess|jsonrpc/3::INFO::2020-07-08 10:08:57,930::configurator::195::root::(_setup_nmstate) Processing setup through nmstate
MainProcess|jsonrpc/3::INFO::2020-07-08 10:08:58,034::configurator::197::root::(_setup_nmstate) Desired state: {'interfaces': [{'name': 'bond0', 'mtu': 9000}, {'name': 'bond1', 'mtu': 9000}, {'name': 'eno3v0', 'mtu': 9000}, {'name': 'eno4v0', 'mtu': 9000}, {'name': 'enp175s0f0v0', 'state': 'up', 'mtu': 9000, 'ipv4': {'enabled': False}, 'ipv6': {'enabled': False}}, {'vlan': {'id': 811, 'base-iface': 'enp175s0f0v0'}, 'name': 'enp175s0f0v0.811', 'type': 'vlan', 'state': 'up', 'mtu': 9000, 'ipv4': {'enabled': False}, 'ipv6': {'enabled': False}}, {'name': 'ovirtmgmt'}]}
MainProcess|jsonrpc/3::DEBUG::2020-07-08 10:08:58,225::checkpoint::121::root::(create) Checkpoint /org/freedesktop/NetworkManager/Checkpoint/9 created for all devices: 60
MainProcess|jsonrpc/3::DEBUG::2020-07-08 10:08:58,225::netapplier::239::root::(_add_interfaces) Adding new interfaces: ['enp175s0f0v0.811']
MainProcess|jsonrpc/3::DEBUG::2020-07-08 10:08:58,229::netapplier::251::root::(_edit_interfaces) Editing interfaces: ['eno4v0', 'enp175s0f0v0', 'ovirtmgmt', 'bond1', 'bond0', 'eno3v0']
MainProcess|jsonrpc/3::DEBUG::2020-07-08 10:08:58,233::nmclient::136::root::(execute_next_action) Executing NM action: func=add_connection_async
MainProcess|jsonrpc/3::ERROR::2020-07-08 10:08:58,235::nmclient::200::root::(quit) NM main-loop aborted: Connection adding failed: error=nm-connection-error-quark: connection.interface-name: 'enp175s0f0v0.811': interface name is longer than 15 characters (7)
MainProcess|jsonrpc/3::DEBUG::2020-07-08 10:08:58,239::checkpoint::164::root::(rollback) Checkpoint /org/freedesktop/NetworkManager/Checkpoint/9 rollback executed: dbus.Dictionary({dbus.String('/org/freedesktop/NetworkManager/Devices/27'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/41'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/15'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/7'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/43'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/35'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/2'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/10'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/32'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/22'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/14'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/42'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/13'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/19'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/38'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/33'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/1'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/29'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/21'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/11'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/23'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/20'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/31'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/44'): dbus.UInt32(0), 
dbus.String('/org/freedesktop/NetworkManager/Devices/34'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/24'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/6'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/45'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/16'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/4'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/9'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/12'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/30'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/26'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/18'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/17'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/3'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/25'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/40'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/5'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/28'): dbus.UInt32(0)}, signature=dbus.Signature('su'))
MainProcess|jsonrpc/3::ERROR::2020-07-08 10:09:03,245::supervdsm_server::97::SuperVdsm.ServerCallback::(wrapper) Error in setupNetworks
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 95, in wrapper
    res = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/network/api.py", line 241, in setupNetworks
    _setup_networks(networks, bondings, options, net_info)
  File "/usr/lib/python3.6/site-packages/vdsm/network/api.py", line 266, in _setup_networks
    networks, bondings, options, net_info, in_rollback
  File "/usr/lib/python3.6/site-packages/vdsm/network/netswitch/configurator.py", line 154, in setup
    _setup_nmstate(networks, bondings, options, in_rollback)
  File "/usr/lib/python3.6/site-packages/vdsm/network/netswitch/configurator.py", line 199, in _setup_nmstate
    nmstate.setup(desired_state, verify_change=not in_rollback)
  File "/usr/lib/python3.6/site-packages/vdsm/network/nmstate.py", line 63, in setup
    state_apply(desired_state, verify_change=verify_change)
  File "/usr/lib/python3.6/site-packages/libnmstate/deprecation.py", line 40, in wrapper
    return func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/libnmstate/nm/nmclient.py", line 96, in wrapped
    ret = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/libnmstate/netapplier.py", line 73, in apply
    state.State(desired_state), verify_change, commit, rollback_timeout
  File "/usr/lib/python3.6/site-packages/libnmstate/netapplier.py", line 163, in _apply_ifaces_state
    con_profiles=ifaces_add_configs + ifaces_edit_configs,
  File "/usr/lib64/python3.6/contextlib.py", line 88, in __exit__
    next(self.gen)
  File "/usr/lib/python3.6/site-packages/libnmstate/netapplier.py", line 232, in _setup_providers
    mainloop.run(timeout=MAINLOOP_TIMEOUT)
  File "/usr/lib/python3.6/site-packages/libnmstate/nm/nmclient.py", line 177, in run
    f"Unexpected failure of libnm when running the mainloop: {err}"
libnmstate.error.NmstateLibnmError: Unexpected failure of libnm when running the mainloop: run execution



I see two possible solutions here:
1. oVirt Engine should generate a shorter name when the customer activates VFs from "Setup Host Networks".
2. If it is possible to increase the interface name length limit, that would solve the issue too.
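For what it's worth, option 2 is not really available: the 15-character ceiling comes from the kernel's compiled-in IFNAMSIZ constant. That also gives the name budget for option 1, sketched here (the '.811' tag is the one from this report):

```shell
# A 3-digit VLAN tag costs 4 characters ('.811'), so the base device name
# must fit in 15 - 4 = 11 characters.
vlan_suffix='.811'
max_base=$(( 15 - ${#vlan_suffix} ))
echo "$max_base"                  # 11
base='enp175s0f0v0'
echo "${#base}"                   # 12, one character too long
```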


oVirt Team, can you please help solve this issue? It is urgent for me, because I can't upgrade my oVirt environment from 4.3.9 to 4.4.
oVirt 4.3.9 does not have this issue, so this is a functionality regression.

Thank you for help.

Comment 1 Dominik Holler 2020-07-08 13:15:14 UTC
Thanks for reporting the bug.

Does the workaround from the doc text of bug 1478865 help in your scenario?

Comment 2 Pavel Zinchuk 2020-07-08 15:01:43 UTC
Hi Dominik,

Thank you for the link, but the workaround from bug 1478865 is not acceptable.

Based on the Red Hat support answer (https://access.redhat.com/solutions/2435891), it is not safe to use the old network device naming.
Red Hat strongly recommends using the new RHEL 7 and RHEL 8 naming conventions.

I think customers could see unexpected behavior on oVirt hosts in future oVirt updates if the Predictable Network Interface Names feature is disabled.
Moreover, this functionality works properly on oVirt 4.3.9, which suggests a regression was introduced in oVirt 4.4.

Comment 3 Ales Musil 2020-07-09 05:44:41 UTC
Hi Pavel,

what about the second proposed workaround, adding udev rules for the interfaces that are causing the trouble?

Something like:

ACTION=="add", SUBSYSTEM=="net", DRIVERS=="?*", ATTR{address}=="00:50:56:8e:12:34", NAME="eth123"

should do the trick.

Is this a viable workaround until we have a proper solution in place?

Comment 4 Pavel Zinchuk 2020-07-09 06:46:30 UTC
Hi Ales,

The second proposed workaround will not work either.
I can rename the parent interface with a udev rule (matching it by MAC address); for example, I can rename enp175s0f0 to enp175s0.
But the virtual interface created from this parent will still get the default name enp175s0f0v0.

At the same time, I can't rename the virtual interface itself, because I can't predict the MAC address that will be assigned to it.

Comment 5 Ales Musil 2020-07-09 07:26:52 UTC
Hi Pavel,

Since the PCI address is stable for an SR-IOV PF, the VFs also get predictable PCI addresses derived from the PF.
The exact pattern depends on the driver, but you should be able to find the PCI assignment pattern for the VFs and then use a rule like this:

ACTION=="add", SUBSYSTEM=="net", KERNELS=="0000:03:00.0", NAME:="eth123"

Hopefully this helps.
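As a worked example of such a pattern, here is a hypothetical helper; the 0000:af:10.x base and the stride of 2 match the VF layout Pavel reports for this 82599 card later in this bug, and will differ for other cards and drivers:

```shell
# Compute the PCI address of the n-th VF of a PF on this 82599 card, assuming
# VFs appear at 0000:af:10.0, 0000:af:10.1, ... with PF function 0 taking the
# even function numbers and PF function 1 the odd ones (stride 2).
vf_addr() {
  pf_fn=$1   # PF function number: 0 or 1
  vf_idx=$2  # VF index on that PF
  printf '0000:af:10.%d\n' $(( pf_fn + 2 * vf_idx ))
}
vf_addr 0 1   # -> 0000:af:10.2
vf_addr 1 0   # -> 0000:af:10.1
```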

Comment 6 Dominik Holler 2020-07-09 07:43:57 UTC
> oVirt Team, can you please help to solve this issue. It urgent for me, because I can't upgrade my ovirt environment from 4.3.9 to the 4.4
> oVirt 4.3.9 don't have this issue, therefore it is functionality degradation.

Was the interface naming in 4.3.9 different for you?

Comment 7 Pavel Zinchuk 2020-07-09 10:18:02 UTC
Hi Dominik

Checked oVirt 4.3.9
Yes, oVirt 4.3.9 does not add v0/v1 suffixes at the end of the virtual interface names.
The virtual interfaces of the eno NICs also have no v0/v1 suffix, and some of them have no f0/f1 part in their names either.

My oVirt 4.3.9 host has the following interfaces, for example.
Physical interfaces:
 - eno1
 - eno2
 - eno3
 - eno4
 - enp59s0f0
 - enp59s0f1
 - enp175s0f0
 - enp175s0f1

Virtual interfaces:
 - enp24s10    (parent: eno3)
 - enp24s10f1  (parent: eno3)
 - enp24s14    (parent: eno4)
 - enp24s14f1  (parent: eno4)
 - enp175s16f1 (parent: enp175s0f1)

Comment 8 Pavel Zinchuk 2020-07-09 13:23:30 UTC
Hi Ales,

I have tested the udev rule approach, linking the physical network interfaces to their PCI slots.

Added this configuration:
# cat /etc/udev/rules.d/60-persistent-net.rules
ACTION=="add", SUBSYSTEM=="net", KERNELS=="0000:af:00.0", NAME:="enp175s0"
ACTION=="add", SUBSYSTEM=="net", KERNELS=="0000:af:00.1", NAME:="enp175s1"

After a reboot I got the correct network names:
# ip a | grep enp175s
6: enp175s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
7: enp175s1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

But when I enabled VFs for interface enp175s0 from the oVirt "Setup Host Networks" dialog, I got a new virtual interface with the long name enp175s0f0v0.


Unfortunately, this workaround doesn't work.

Comment 9 Ales Musil 2020-07-10 05:30:07 UTC
Hi Pavel, 

there has been a slight misunderstanding. The rules that you added handle only the physical function (PF).
If you enable VFs via sysfs/webadmin, each VF gets assigned its own PCI address. For example,
if your PF has the PCI address 0000:af:00.0, the VFs can get something like 0000:af:01.0, 0000:af:02.0, 0000:af:03.0, etc.
The assignment pattern depends on your network card driver but should be deterministic. You can find it by
looking at lspci -D with some of the VFs enabled.

For the names that you mentioned in your reply to Dominik: we comply with the naming rules set by systemd.
That being said, there is probably some change in systemd between el7 and el8 which caused this.
The change that seems to match what you have described was made in systemd v239 [0][1].

[0] "v239

    Naming was changed for virtual network interfaces created with SR-IOV and NPAR and for devices where the PCI network controller device does not have a slot number associated.
    SR-IOV virtual devices are named based on the name of the parent interface, with a suffix of "vport", where port is the virtual device number. Previously those virtual devices were named as if completely independent.
    The ninth and later NPAR virtual devices are named following the scheme used for the first eight NPAR partitions. Previously those devices were not renamed and the kernel default ("ethN") was used.
    Names are also generated for PCI devices where the PCI network controller device does not have an associated slot number itself, but one of its parents does. Previously those devices were not renamed and the kernel default was used."

[1] https://github.com/systemd/systemd/blob/master/NEWS#L2502

Comment 10 Pavel Zinchuk 2020-07-12 14:31:11 UTC
Hi Ales,

Thank you for the clarification.
I finally managed to partially apply your workaround. But it comes with a lot of nuances and problems, and it can't be fully used, due to a random error when assigning Logical Networks to the network interfaces.


First of all, I want to describe the steps that allowed me to get virtual interfaces with shorter names:
1. I needed the PCI addresses of the virtual interfaces I would get. To obtain them, I temporarily activated the virtual functions. As a result, the following udev rules were configured:
    # cat /etc/udev/rules.d/60-persistent-net.rules
    ACTION=="add", SUBSYSTEM=="net", KERNELS=="0000:af:00.0", NAME:="enp175s0"
    ACTION=="add", SUBSYSTEM=="net", KERNELS=="0000:af:00.1", NAME:="enp175s1"
    ACTION=="add", SUBSYSTEM=="net", KERNELS=="0000:af:10.0", NAME:="enp175s0v0" #virtual interface, parent: enp175s0
    ACTION=="add", SUBSYSTEM=="net", KERNELS=="0000:af:10.2", NAME:="enp175s0v1" #virtual interface, parent: enp175s0
    ACTION=="add", SUBSYSTEM=="net", KERNELS=="0000:af:10.1", NAME:="enp175s1v0" #virtual interface, parent: enp175s1
    ACTION=="add", SUBSYSTEM=="net", KERNELS=="0000:af:10.3", NAME:="enp175s1v1" #virtual interface, parent: enp175s1
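The renaming scheme behind these rules can be sketched as a small shell helper (hypothetical, not part of any oVirt tooling): it folds the PCI function number into the slot position, so enp175s0f0/enp175s0f1 become enp175s0/enp175s1, leaving room for the VF and VLAN suffixes.

```shell
# Hypothetical helper illustrating the renaming scheme used in the
# udev rules above: enp175s0f0 -> enp175s0, enp175s0f1 -> enp175s1.
short_pf_name() {
    printf '%s\n' "$1" | sed 's/s[0-9]*f\([0-9]\)$/s\1/'
}

short_pf_name enp175s0f0   # -> enp175s0
short_pf_name enp175s0f1   # -> enp175s1
```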

2. Then the ifcfg file names, and the device names inside the files, were renamed from enp175s0f0 to enp175s0 and from enp175s0f1 to enp175s1:
    # mv /etc/sysconfig/network-scripts/ifcfg-enp175s0f0 /etc/sysconfig/network-scripts/ifcfg-enp175s0
    # mv /etc/sysconfig/network-scripts/ifcfg-enp175s0f1 /etc/sysconfig/network-scripts/ifcfg-enp175s1
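Renaming the files alone is not enough; the DEVICE= line inside each ifcfg file has to be updated too. A minimal sketch of that rewrite (the sed expression is an assumption, not taken from the original steps, and reads the file contents from stdin):

```shell
# Rewrite the DEVICE= line of ifcfg contents read on stdin (sketch).
rename_device() {
    old="$1" new="$2"
    sed "s/^DEVICE=${old}\$/DEVICE=${new}/"
}

echo 'DEVICE=enp175s0f0' | rename_device enp175s0f0 enp175s0
# -> DEVICE=enp175s0
```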

3. As a result, the interfaces will change. Keep in mind that this causes issues for a host already configured in the oVirt cluster, so I removed the records about the old interfaces:
    # mv /var/lib/vdsm/persistence/netconf/devices/enp175s0f0 /var/lib/vdsm/persistence/netconf/devices/enp175s0
    # mv /var/lib/vdsm/persistence/netconf/devices/enp175s0f1 /var/lib/vdsm/persistence/netconf/devices/enp175s1
    # mv /var/lib/vdsm/staging/netconf.6qiaKssQ/devices/enp175s0f0 /var/lib/vdsm/staging//netconf.6qiaKssQ/devices/enp175s0
    # mv /var/lib/vdsm/staging/netconf.6qiaKssQ/devices/enp175s0f1 /var/lib/vdsm/staging//netconf.6qiaKssQ/devices/enp175s1

Removed information about the interfaces enp175s0f0 and enp175s0f1 from the file /var/lib/lldpad/lldpad.conf.

Disabled the virtual functions on the old interfaces:
    # echo 0 > /sys/class/net/enp175s0f0/device/sriov_numvfs
    # echo 0 > /sys/class/net/enp175s0f1/device/sriov_numvfs

Removed information about old iSCSI sessions:
    # rm -rf /var/lib/iscsi/nodes/*

By the way, if you know a simpler and more correct way to rename the network interfaces of a configured oVirt host, please share it; I did not find this mentioned in the oVirt documentation.

4. Reboot oVirt Host.

5. After the reboot, check that the interfaces were renamed correctly:
    # ip a | grep enp175
    5: enp175s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    6: enp175s1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

6. After the reboot and successful activation of the oVirt host in the oVirt Engine, enable VFs (virtual functions) for the new interfaces enp175s0 and enp175s1.
As a result we will have virtual interfaces like these:
    # ip a | grep enp175
    5: enp175s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    6: enp175s1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    62: enp175s0v0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    63: enp175s0v1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    64: enp175s1v0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    65: enp175s1v1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
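With the shortened parent names, the resulting VLAN interface names now fit within the kernel's limit of 15 characters (IFNAMSIZ minus the terminating NUL), which is what the original error was about. A quick check:

```shell
# Check an interface name against the kernel's 15-character limit.
check_ifname() {
    if [ "${#1}" -le 15 ]; then
        echo "ok: $1 (${#1} chars)"
    else
        echo "too long: $1 (${#1} chars)"
    fi
}

check_ifname enp175s1v0.811     # new scheme, fits
check_ifname enp175s0f0v0.811   # original name from the bug title, exceeds the limit
```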


All this time the oVirt host was not in maintenance mode, which allowed VDSM to sync the new networks with the oVirt Engine database.


7. After that I tried to assign a Logical Network with VLAN id 811 to the virtual interface enp175s1v1, but again got the error "Error while executing action HostSetupNetworks: Unexpected exception".
In the log files I found the following.
/var/log/vdsm/supervdsm.log:
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:36,196::supervdsm_server::93::SuperVdsm.ServerCallback::(wrapper) call setupNetworks with ({'on86a9bbfbc27a4': {'vlan': '811', 'ipv6autoconf': False, 'nic': 'enp175s1v0', 'bridged': 'false', 'dhcpv6': False, 'mtu': 9000, 'switch': 'legacy'}}, {}, {'connectivityTimeout': 120, 'commitOnSuccess': True, 'connectivityCheck': 'true'}) {}
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:36,196::api::220::root::(setupNetworks) Setting up network according to configuration: networks:{'on86a9bbfbc27a4': {'vlan': '811', 'ipv6autoconf': False, 'nic': 'enp175s1v0', 'bridged': 'false', 'dhcpv6': False, 'mtu': 9000, 'switch': 'legacy'}}, bondings:{}, options:{'connectivityTimeout': 120, 'commitOnSuccess': True, 'connectivityCheck': 'true'}
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:36,239::cmdutils::130::root::(exec_cmd) /sbin/tc qdisc show (cwd None)
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:36,249::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:36,254::cmdutils::130::root::(exec_cmd) /sbin/tc class show dev eno3v0 classid 0:32b (cwd None)
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:36,260::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:36,261::cmdutils::130::root::(exec_cmd) /sbin/tc class show dev eno4v0 classid 0:32b (cwd None)
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:36,267::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:36,267::cmdutils::130::root::(exec_cmd) /sbin/tc class show dev enp175s0v1 classid 0:32b (cwd None)
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:36,273::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:36,273::cmdutils::130::root::(exec_cmd) /sbin/tc class show dev enp175s1v1 classid 0:32b (cwd None)
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:36,279::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0
MainProcess|mpathhealth::DEBUG::2020-07-12 11:07:36,373::supervdsm_server::93::SuperVdsm.ServerCallback::(wrapper) call dmsetup_run_status with ('multipath',) {}
MainProcess|mpathhealth::DEBUG::2020-07-12 11:07:36,373::commands::153::common.commands::(start) /usr/bin/taskset --cpu-list 0-39 /usr/sbin/dmsetup status --target multipath (cwd None)
MainProcess|mpathhealth::DEBUG::2020-07-12 11:07:36,382::commands::98::common.commands::(run) SUCCESS: <err> = b''; <rc> = 0
MainProcess|mpathhealth::DEBUG::2020-07-12 11:07:36,382::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) return dmsetup_run_status with b'3600140564afd83aeaa34db4a1b8d93ec: 0 5322571776 multipath 2 0 1 0 12 1 A 0 1 2 8:32 A 0 0 1 E 0 1 2 8:64 A 0 0 1 E 0 1 2 8:96 A 0 0 1 E 0 1 2 8:128 A 0 0 1 E 0 1 2 8:160 A 0 0 1 E 0 1 2 8:192 A 0 0 1 E 0 1 2 8:224 A 0 0 1 E 0 1 2 65:0 A 0 0 1 E 0 1 2 65:32 A 0 0 1 E 0 1 2 65:64 A 0 0 1 E 0 1 2 65:96 A 0 0 1 E 0 1 2 65:128 A 0 0 1 \n36001405b22c81398e04495480b07dda4: 0 8870952960 multipath 2 0 1 0 12 1 A 0 1 2 8:16 A 0 0 1 E 0 1 2 8:48 A 0 0 1 E 0 1 2 8:80 A 0 0 1 E 0 1 2 8:112 A 0 0 1 E 0 1 2 8:144 A 0 0 1 E 0 1 2 8:176 A 0 0 1 E 0 1 2 8:208 A 0 0 1 E 0 1 2 8:240 A 0 0 1 E 0 1 2 65:16 A 0 0 1 E 0 1 2 65:48 A 0 0 1 E 0 1 2 65:80 A 0 0 1 E 0 1 2 65:112 A 0 0 1 \n'
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:36,474::vsctl::74::root::(commit) Executing commands: /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list Interface
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:36,475::cmdutils::130::root::(exec_cmd) /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list Interface (cwd None)
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:36,484::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:36,494::netconfpersistence::58::root::(setNetwork) Adding network prod_data({'bridged': True, 'stp': False, 'mtu': 1500, 'vlan': 803, 'bonding': 'bond0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:36,494::netconfpersistence::58::root::(setNetwork) Adding network onc5d63a790fef4({'bridged': False, 'mtu': 9000, 'vlan': 810, 'bonding': 'bond1', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'ipaddr': '10.10.96.11', 'netmask': '255.255.255.224', 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:36,494::netconfpersistence::58::root::(setNetwork) Adding network vmwmgmt({'bridged': False, 'mtu': 1500, 'vlan': 40, 'bonding': 'bond0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:36,494::netconfpersistence::58::root::(setNetwork) Adding network prod_mon({'bridged': True, 'stp': False, 'mtu': 1500, 'vlan': 802, 'bonding': 'bond0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'ipaddr': '10.10.72.103', 'netmask': '255.255.248.0', 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:36,494::netconfpersistence::58::root::(setNetwork) Adding network old_data({'bridged': True, 'stp': False, 'mtu': 1500, 'vlan': 80, 'bonding': 'bond0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:36,494::netconfpersistence::58::root::(setNetwork) Adding network ondb0a9571c03a4({'bridged': False, 'mtu': 9000, 'vlan': 811, 'nic': 'eno3v0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'ipaddr': '10.10.96.77', 'netmask': '255.255.255.192', 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:36,494::netconfpersistence::58::root::(setNetwork) Adding network old_mgmt({'bridged': True, 'stp': False, 'mtu': 1500, 'vlan': 20, 'bonding': 'bond0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:36,494::netconfpersistence::58::root::(setNetwork) Adding network ovirtmgmt({'bridged': True, 'stp': False, 'mtu': 1500, 'vlan': 805, 'bonding': 'bond0', 'defaultRoute': True, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'ipaddr': '10.10.98.103', 'netmask': '255.255.255.0', 'gateway': '10.10.98.254', 'switch': 'legacy', 'nameservers': ['127.0.0.1', '8.8.8.8', '8.8.4.4']})
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:36,494::netconfpersistence::58::root::(setNetwork) Adding network old_mon({'bridged': True, 'stp': False, 'mtu': 1500, 'vlan': 30, 'bonding': 'bond0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:36,494::netconfpersistence::58::root::(setNetwork) Adding network localdata({'bridged': False, 'mtu': 9000, 'vlan': 806, 'bonding': 'bond0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'ipaddr': '10.10.99.103', 'netmask': '255.255.255.0', 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:36,494::netconfpersistence::58::root::(setNetwork) Adding network old_sip({'bridged': True, 'stp': False, 'mtu': 1500, 'vlan': 60, 'bonding': 'bond0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:36,494::netconfpersistence::58::root::(setNetwork) Adding network prod_public({'bridged': True, 'stp': False, 'mtu': 1500, 'vlan': 10, 'bonding': 'bond0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:36,494::netconfpersistence::58::root::(setNetwork) Adding network prod_sip({'bridged': True, 'stp': False, 'mtu': 1500, 'vlan': 804, 'bonding': 'bond0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:36,494::netconfpersistence::58::root::(setNetwork) Adding network prod_mgmt({'bridged': True, 'stp': False, 'mtu': 1500, 'vlan': 801, 'bonding': 'bond0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'ipaddr': '10.10.64.103', 'netmask': '255.255.248.0', 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:36,494::netconfpersistence::58::root::(setNetwork) Adding network on285a0fef5c764({'bridged': False, 'mtu': 9000, 'vlan': 811, 'nic': 'eno4v0', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'ipaddr': '10.10.96.78', 'netmask': '255.255.255.192', 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:36,494::netconfpersistence::58::root::(setNetwork) Adding network onebc10d2908654({'bridged': False, 'mtu': 9000, 'vlan': 811, 'nic': 'enp175s0v1', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'ipaddr': '10.10.96.75', 'netmask': '255.255.255.192', 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:36,494::netconfpersistence::58::root::(setNetwork) Adding network on86a9bbfbc27a4({'bridged': False, 'mtu': 1500, 'vlan': 811, 'nic': 'enp175s1v1', 'defaultRoute': False, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'switch': 'legacy', 'nameservers': []})
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:36,496::netconfpersistence::69::root::(setBonding) Adding bond1({'nics': ['eno3', 'eno4'], 'options': 'mode=6', 'switch': 'legacy', 'hwaddr': '0c:c4:7a:2a:6e:63'})
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:36,496::netconfpersistence::69::root::(setBonding) Adding bond0({'nics': ['eno1', 'eno2', 'enp59s0f0', 'enp59s0f1'], 'options': 'mode=4', 'switch': 'legacy', 'hwaddr': '0c:c4:7a:2a:6e:60'})
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:36,496::netconfpersistence::58::root::(setNetwork) Adding network on86a9bbfbc27a4({'vlan': 811, 'ipv6autoconf': False, 'nic': 'enp175s1v0', 'bridged': False, 'dhcpv6': False, 'mtu': 9000, 'switch': 'legacy', 'defaultRoute': False, 'bootproto': 'none', 'nameservers': []})
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:36,499::commands::153::common.commands::(start) /usr/bin/taskset --cpu-list 0-39 /usr/libexec/vdsm/hooks/before_network_setup/50_fcoe (cwd None)
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:37,006::hooks::122::root::(_runHooksDir) /usr/libexec/vdsm/hooks/before_network_setup/50_fcoe: rc=0 err=b''
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:37,007::configurator::195::root::(_setup_nmstate) Processing setup through nmstate
MainProcess|jsonrpc/0::INFO::2020-07-12 11:07:37,106::configurator::197::root::(_setup_nmstate) Desired state: {'interfaces': [{'name': 'bond0', 'mtu': 9000}, {'name': 'bond1', 'mtu': 9000}, {'name': 'eno3v0', 'mtu': 9000}, {'name': 'eno4v0', 'mtu': 9000}, {'name': 'enp175s0v1', 'mtu': 9000}, {'name': 'enp175s1v0', 'state': 'up', 'mtu': 9000, 'ipv4': {'enabled': False}, 'ipv6': {'enabled': False}}, {'vlan': {'id': 811, 'base-iface': 'enp175s1v0'}, 'name': 'enp175s1v0.811', 'type': 'vlan', 'state': 'up', 'mtu': 9000, 'ipv4': {'enabled': False}, 'ipv6': {'enabled': False}}, {'name': 'enp175s1v1.811', 'state': 'absent'}, {'name': 'ovirtmgmt'}]}
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,305::checkpoint::121::root::(create) Checkpoint /org/freedesktop/NetworkManager/Checkpoint/22 created for all devices: 60
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,306::netapplier::239::root::(_add_interfaces) Adding new interfaces: ['enp175s1v0.811']
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,309::netapplier::251::root::(_edit_interfaces) Editing interfaces: ['bond1', 'eno4v0', 'enp175s0v1', 'eno3v0', 'bond0', 'enp175s1v1.811', 'enp175s1v0', 'ovirtmgmt']
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,313::nmclient::136::root::(execute_next_action) Executing NM action: func=add_connection_async
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,337::connection::329::root::(_add_connection_callback) Connection adding succeeded: dev=enp175s1v0.811
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,338::nmclient::136::root::(execute_next_action) Executing NM action: func=commit_changes_async
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,349::connection::386::root::(_commit_changes_callback) Connection update succeeded: dev=bond0
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,349::nmclient::136::root::(execute_next_action) Executing NM action: func=commit_changes_async
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,357::connection::386::root::(_commit_changes_callback) Connection update succeeded: dev=bond1
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,357::nmclient::136::root::(execute_next_action) Executing NM action: func=commit_changes_async
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,360::connection::386::root::(_commit_changes_callback) Connection update succeeded: dev=eno3v0
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,360::nmclient::136::root::(execute_next_action) Executing NM action: func=commit_changes_async
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,362::connection::386::root::(_commit_changes_callback) Connection update succeeded: dev=eno4v0
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,362::nmclient::136::root::(execute_next_action) Executing NM action: func=commit_changes_async
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,365::connection::386::root::(_commit_changes_callback) Connection update succeeded: dev=enp175s0v1
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,365::nmclient::136::root::(execute_next_action) Executing NM action: func=add_connection_async
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,369::connection::329::root::(_add_connection_callback) Connection adding succeeded: dev=enp175s1v0
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,369::nmclient::136::root::(execute_next_action) Executing NM action: func=commit_changes_async
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,376::connection::386::root::(_commit_changes_callback) Connection update succeeded: dev=ovirtmgmt
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,376::nmclient::136::root::(execute_next_action) Executing NM action: func=_safe_modify_async
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,384::device::149::root::(_modify_callback) Device reapply succeeded: dev=bond1
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,384::nmclient::136::root::(execute_next_action) Executing NM action: func=_safe_modify_async
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,386::device::149::root::(_modify_callback) Device reapply succeeded: dev=bond0
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,386::nmclient::136::root::(execute_next_action) Executing NM action: func=_safe_modify_async
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,391::device::149::root::(_modify_callback) Device reapply succeeded: dev=ovirtmgmt
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,391::nmclient::136::root::(execute_next_action) Executing NM action: func=_safe_modify_async
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,392::device::143::root::(_modify_callback) Device reapply failed on enp175s1v0: error=nm-device-error-quark: Can't reapply changes to 'connection.autoconnect-slaves' setting (3)
Fallback to device activation
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,395::connection::215::root::(_active_connection_callback) Connection activation initiated: dev=enp175s1v0, con-state=<enum NM_ACTIVE_CONNECTION_STATE_ACTIVATING of type NM.ActiveConnectionState>
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,443::connection::301::root::(_waitfor_active_connection_callback) Connection activation succeeded: dev=enp175s1v0, con-state=<enum NM_ACTIVE_CONNECTION_STATE_ACTIVATED of type NM.ActiveConnectionState>, dev-state=<enum NM_DEVICE_STATE_ACTIVATED of type NM.DeviceState>, state-flags=<flags NM_ACTIVATION_STATE_FLAG_LAYER2_READY | NM_ACTIVATION_STATE_FLAG_IP4_READY | NM_ACTIVATION_STATE_FLAG_IP6_READY of type NM.ActivationStateFlags>
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,444::nmclient::136::root::(execute_next_action) Executing NM action: func=_safe_modify_async
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,446::device::149::root::(_modify_callback) Device reapply succeeded: dev=eno4v0
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,446::nmclient::136::root::(execute_next_action) Executing NM action: func=_safe_modify_async
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,447::device::149::root::(_modify_callback) Device reapply succeeded: dev=enp175s0v1
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,448::nmclient::136::root::(execute_next_action) Executing NM action: func=_safe_modify_async
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,449::device::149::root::(_modify_callback) Device reapply succeeded: dev=eno3v0
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,449::nmclient::136::root::(execute_next_action) Executing NM action: func=safe_activate_async
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,451::connection::215::root::(_active_connection_callback) Connection activation initiated: dev=enp175s1v0.811, con-state=<enum NM_ACTIVE_CONNECTION_STATE_ACTIVATING of type NM.ActiveConnectionState>
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,495::connection::301::root::(_waitfor_active_connection_callback) Connection activation succeeded: dev=enp175s1v0.811, con-state=<enum NM_ACTIVE_CONNECTION_STATE_ACTIVATED of type NM.ActiveConnectionState>, dev-state=<enum NM_DEVICE_STATE_ACTIVATED of type NM.DeviceState>, state-flags=<flags NM_ACTIVATION_STATE_FLAG_LAYER2_READY | NM_ACTIVATION_STATE_FLAG_IP4_READY | NM_ACTIVATION_STATE_FLAG_IP6_READY of type NM.ActivationStateFlags>
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,495::nmclient::136::root::(execute_next_action) Executing NM action: func=_safe_deactivate_async
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,498::active_connection::128::root::(_deactivate_connection_callback) Connection deactivation succeeded on enp175s1v1.811
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,504::nmclient::136::root::(execute_next_action) Executing NM action: func=_safe_delete_async
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,540::connection::355::root::(_delete_connection_callback) Connection deletion succeeded: dev=enp175s1v1.811
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,540::nmclient::136::root::(execute_next_action) Executing NM action: func=_safe_delete_device_async
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,541::device::196::root::(_delete_device_callback) Interface is not real anymore: iface=enp175s1v1.811
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,541::device::199::root::(_delete_device_callback) Ignored error: g-dbus-error-quark: No such interface 'org.freedesktop.NetworkManager.Device' on object at path /org/freedesktop/NetworkManager/Devices/77 (19)
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:37,541::nmclient::139::root::(execute_next_action) NM action queue exhausted, quiting mainloop
MainProcess|jsonrpc/0::DEBUG::2020-07-12 11:07:43,222::checkpoint::164::root::(rollback) Checkpoint /org/freedesktop/NetworkManager/Checkpoint/22 rollback executed: dbus.Dictionary({dbus.String('/org/freedesktop/NetworkManager/Devices/77'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/3'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/13'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/54'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/31'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/21'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/23'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/18'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/12'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/7'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/27'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/32'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/10'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/42'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/20'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/16'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/36'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/64'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/17'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/5'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/15'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/29'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/19'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/9'): dbus.UInt32(0), 
dbus.String('/org/freedesktop/NetworkManager/Devices/14'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/4'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/22'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/6'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/24'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/30'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/56'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/33'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/11'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/1'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/2'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/25'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/26'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/35'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/34'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/28'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/43'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/55'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/44'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/8'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/40'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/45'): dbus.UInt32(0), dbus.String('/org/freedesktop/NetworkManager/Devices/63'): dbus.UInt32(0)}, signature=dbus.Signature('su'))
MainProcess|mpathhealth::DEBUG::2020-07-12 11:07:46,383::supervdsm_server::93::SuperVdsm.ServerCallback::(wrapper) call dmsetup_run_status with ('multipath',) {}
MainProcess|mpathhealth::DEBUG::2020-07-12 11:07:46,383::commands::153::common.commands::(start) /usr/bin/taskset --cpu-list 0-39 /usr/sbin/dmsetup status --target multipath (cwd None)
MainProcess|mpathhealth::DEBUG::2020-07-12 11:07:46,395::commands::98::common.commands::(run) SUCCESS: <err> = b''; <rc> = 0
MainProcess|mpathhealth::DEBUG::2020-07-12 11:07:46,395::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) return dmsetup_run_status with b'3600140564afd83aeaa34db4a1b8d93ec: 0 5322571776 multipath 2 0 1 0 12 1 A 0 1 2 8:32 A 0 0 1 E 0 1 2 8:64 A 0 0 1 E 0 1 2 8:96 A 0 0 1 E 0 1 2 8:128 A 0 0 1 E 0 1 2 8:160 A 0 0 1 E 0 1 2 8:192 A 0 0 1 E 0 1 2 8:224 A 0 0 1 E 0 1 2 65:0 A 0 0 1 E 0 1 2 65:32 A 0 0 1 E 0 1 2 65:64 A 0 0 1 E 0 1 2 65:96 A 0 0 1 E 0 1 2 65:128 A 0 0 1 \n36001405b22c81398e04495480b07dda4: 0 8870952960 multipath 2 0 1 0 12 1 A 0 1 2 8:16 A 0 0 1 E 0 1 2 8:48 A 0 0 1 E 0 1 2 8:80 A 0 0 1 E 0 1 2 8:112 A 0 0 1 E 0 1 2 8:144 A 0 0 1 E 0 1 2 8:176 A 0 0 1 E 0 1 2 8:208 A 0 0 1 E 0 1 2 8:240 A 0 0 1 E 0 1 2 65:16 A 0 0 1 E 0 1 2 65:48 A 0 0 1 E 0 1 2 65:80 A 0 0 1 E 0 1 2 65:112 A 0 0 1 \n'
MainProcess|jsonrpc/0::ERROR::2020-07-12 11:07:48,228::supervdsm_server::97::SuperVdsm.ServerCallback::(wrapper) Error in setupNetworks
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 95, in wrapper
    res = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/network/api.py", line 241, in setupNetworks
    _setup_networks(networks, bondings, options, net_info)
  File "/usr/lib/python3.6/site-packages/vdsm/network/api.py", line 266, in _setup_networks
    networks, bondings, options, net_info, in_rollback
  File "/usr/lib/python3.6/site-packages/vdsm/network/netswitch/configurator.py", line 154, in setup
    _setup_nmstate(networks, bondings, options, in_rollback)
  File "/usr/lib/python3.6/site-packages/vdsm/network/netswitch/configurator.py", line 199, in _setup_nmstate
    nmstate.setup(desired_state, verify_change=not in_rollback)
  File "/usr/lib/python3.6/site-packages/vdsm/network/nmstate.py", line 63, in setup
    state_apply(desired_state, verify_change=verify_change)
  File "/usr/lib/python3.6/site-packages/libnmstate/deprecation.py", line 40, in wrapper
    return func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/libnmstate/nm/nmclient.py", line 96, in wrapped
    ret = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/libnmstate/netapplier.py", line 73, in apply
    state.State(desired_state), verify_change, commit, rollback_timeout
  File "/usr/lib/python3.6/site-packages/libnmstate/netapplier.py", line 175, in _apply_ifaces_state
    _verify_change(desired_state)
  File "/usr/lib/python3.6/site-packages/libnmstate/netapplier.py", line 221, in _verify_change
    verifiable_desired_state.verify_interfaces(current_state)
  File "/usr/lib/python3.6/site-packages/libnmstate/state.py", line 330, in verify_interfaces
    self._assert_interfaces_equal(other_state)
  File "/usr/lib/python3.6/site-packages/libnmstate/state.py", line 759, in _assert_interfaces_equal
    current_state.interfaces[ifname],
libnmstate.error.NmstateVerificationError:
desired
=======
---
name: enp175s1v0
type: ethernet
state: up
ipv4:
  enabled: false
ipv6:
  enabled: false
mac-address: 02:00:00:00:00:01
mtu: 9000

current
=======
---
name: enp175s1v0
type: ethernet
state: up
ethernet:
  auto-negotiation: false
  duplex: full
  speed: 10000
ipv4:
  enabled: false
ipv6:
  enabled: false
mac-address: 02:00:00:00:00:01
mtu: 1500

difference
==========
--- desired
+++ current
@@ -2,9 +2,13 @@
 name: enp175s1v0
 type: ethernet
 state: up
+ethernet:
+  auto-negotiation: false
+  duplex: full
+  speed: 10000
 ipv4:
   enabled: false
 ipv6:
   enabled: false
 mac-address: 02:00:00:00:00:01
-mtu: 9000
+mtu: 1500


/var/log/messages:
Jul 12 11:10:45 prd-ovirt-host-03 systemd[1]: Stopping Link Layer Discovery Protocol Agent Daemon....
Jul 12 11:10:45 prd-ovirt-host-03 lldpad[27165]: Signal 15 received - terminating
Jul 12 11:10:45 prd-ovirt-host-03 systemd[1]: Stopped Link Layer Discovery Protocol Agent Daemon..
Jul 12 11:10:45 prd-ovirt-host-03 systemd[1]: Started Link Layer Discovery Protocol Agent Daemon..
Jul 12 11:10:45 prd-ovirt-host-03 systemd[1]: Stopping Open-FCoE Inititator....
Jul 12 11:10:45 prd-ovirt-host-03 systemd[1]: Stopped Open-FCoE Inititator..
Jul 12 11:10:45 prd-ovirt-host-03 systemd[1]: Starting Open-FCoE Inititator....
Jul 12 11:10:45 prd-ovirt-host-03 systemd[1]: Started Open-FCoE Inititator..
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0140] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/24" pid=3742 uid=0 result="success"
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0250] manager: (enp175s1v0.811): new VLAN device (/org/freedesktop/NetworkManager/Devices/82)
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0318] device (enp175s1v0.811): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Jul 12 11:10:46 prd-ovirt-host-03 kernel: IPv6: ADDRCONF(NETDEV_UP): enp175s1v0.811: link is not ready
Jul 12 11:10:46 prd-ovirt-host-03 systemd-udevd[28000]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0463] device (enp175s1v0.811): carrier: link connected
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0494] audit: op="connection-add" uuid="2e7daf04-2a53-475b-8031-6a0c1f01b199" name="enp175s1v0.811" pid=3742 uid=0 result="success"
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0502] device (enp175s1v0.811): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0601] policy: auto-activating connection 'enp175s1v0.811' (2e7daf04-2a53-475b-8031-6a0c1f01b199)
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0623] audit: op="connection-update" uuid="947e0a7a-ade8-4156-9674-67e1d41db072" name="Bond connection bond0" args="ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.dhcp-iaid,ipv6.dhcp-duid" pid=3742 uid=0 result="success"
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0632] device (enp175s1v0.811): Activation: starting connection 'enp175s1v0.811' (2e7daf04-2a53-475b-8031-6a0c1f01b199)
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0638] device (enp175s1v0.811): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0644] device (enp175s1v0.811): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0705] audit: op="connection-update" uuid="88d6a947-0c71-48bb-92d8-f05f59b99fa4" name="bond1" args="ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.dhcp-iaid,ipv6.dhcp-duid" pid=3742 uid=0 result="success"
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0733] audit: op="connection-update" uuid="e2931ea2-1b17-4d65-941c-12b627d8b833" name="eno3v0" args="ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.dhcp-iaid,ipv6.dhcp-duid" pid=3742 uid=0 result="success"
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0760] audit: op="connection-update" uuid="94981a21-78af-4a9a-aedd-7303cf36a121" name="eno4v0" args="ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.dhcp-iaid,ipv6.dhcp-duid" pid=3742 uid=0 result="success"
Jul 12 11:10:46 prd-ovirt-host-03 lldpad[27984]: recvfrom(Event interface): No buffer space available
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0787] audit: op="connection-update" uuid="425a5c68-2820-4f8c-93d2-5d985d98ad26" name="enp175s0v1" args="ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.dhcp-iaid,ipv6.dhcp-duid" pid=3742 uid=0 result="success"
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0809] audit: op="connection-add" uuid="4fa608be-1ebd-45bc-afab-053624beb87f" name="enp175s1v0" pid=3742 uid=0 result="success"
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0815] device (enp175s1v0.811): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0859] policy: auto-activating connection 'enp175s1v0' (4fa608be-1ebd-45bc-afab-053624beb87f)
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0862] device (enp175s1v0.811): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 dbus-daemon[1746]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.14' (uid=0 pid=1871 comm="/usr/sbin/NetworkManager --no-daemon " label="system_u:system_r:NetworkManager_t:s0")
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0892] audit: op="connection-update" uuid="80857d30-da1d-4568-a5e9-5e2649efe4f1" name="ovirtmgmt" args="ipv6.addr-gen-mode,ipv6.dhcp-iaid,ipv6.dhcp-duid" pid=3742 uid=0 result="success"
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0898] device (enp175s1v0): Activation: starting connection 'enp175s1v0' (4fa608be-1ebd-45bc-afab-053624beb87f)
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0899] device (enp175s1v0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0903] device (enp175s1v0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0963] audit: op="device-reapply" interface="bond1" ifindex=20 pid=3742 uid=0 result="success"
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.0984] audit: op="device-reapply" interface="bond0" ifindex=23 pid=3742 uid=0 result="success"
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1004] audit: op="device-reapply" interface="ovirtmgmt" ifindex=19 pid=3742 uid=0 result="success"
Jul 12 11:10:46 prd-ovirt-host-03 dbus-daemon[1746]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
Jul 12 11:10:46 prd-ovirt-host-03 systemd[1]: Started Network Manager Script Dispatcher Service.
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1014] device (enp175s1v0.811): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1020] device (enp175s1v0.811): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1032] device (enp175s1v0.811): Activation: successful, device activated.
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1039] audit: op="device-reapply" interface="eno3v0" ifindex=60 pid=3742 uid=0 result="success"
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1056] audit: op="device-reapply" interface="eno4v0" ifindex=58 pid=3742 uid=0 result="success"
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1071] audit: op="device-reapply" interface="enp175s0v1" ifindex=72 pid=3742 uid=0 result="success"
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1086] audit: op="device-reapply" interface="enp175s1v0" ifindex=80 args="connection.autoconnect-slaves,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.dhcp-iaid,ipv6.dhcp-duid" pid=3742 uid=0 result="fail" reason="Can't reapply changes to 'connection.autoconnect-slaves' setting"
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1094] audit: op="device-managed" arg="managed" pid=3742 uid=0 result="success"
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1101] device (enp175s1v0): state change: config -> deactivating (reason 'new-activation', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1109] device (enp175s1v0): disconnecting for new activation request.
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1109] audit: op="connection-activate" uuid="4fa608be-1ebd-45bc-afab-053624beb87f" name="enp175s1v0" pid=3742 uid=0 result="success"
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1113] device (enp175s1v0): state change: deactivating -> disconnected (reason 'new-activation', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 systemd[1]: Reloading Login and scanning of iSCSI devices.
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1197] device (enp175s1v0): Activation: starting connection 'enp175s1v0' (4fa608be-1ebd-45bc-afab-053624beb87f)
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1208] device (enp175s1v0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1213] device (enp175s1v0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 iscsiadm[28008]: iscsiadm: No records found
Jul 12 11:10:46 prd-ovirt-host-03 systemd[1]: Reloaded Login and scanning of iSCSI devices.
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1473] device (enp175s1v0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 kernel: ixgbe 0000:af:00.1 enp175s1: VF max_frame 9018 out of range
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <warn>  [1594552246.1568] platform-linux: do-change-link[80]: failure changing link: failure 22 (Invalid argument)
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <warn>  [1594552246.1569] device (enp175s1v0): mtu: failure to set IPv6 MTU
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1571] device (enp175s1v0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1583] device (enp175s1v0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1585] device (enp175s1v0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1597] device (enp175s1v0): Activation: successful, device activated.
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1606] device (enp175s1v0.811): state change: activated -> deactivating (reason 'new-activation', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1616] device (enp175s1v0.811): disconnecting for new activation request.
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1617] audit: op="connection-activate" uuid="2e7daf04-2a53-475b-8031-6a0c1f01b199" name="enp175s1v0.811" pid=3742 uid=0 result="success"
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1622] device (enp175s1v0.811): state change: deactivating -> disconnected (reason 'new-activation', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1684] device (enp175s1v0.811): Activation: starting connection 'enp175s1v0.811' (2e7daf04-2a53-475b-8031-6a0c1f01b199)
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1694] device (enp175s1v0.811): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1699] device (enp175s1v0.811): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1941] device (enp175s1v0.811): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1986] device (enp175s1v0.811): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.1999] device (enp175s1v0.811): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.2001] device (enp175s1v0.811): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.2014] device (enp175s1v0.811): Activation: successful, device activated.
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.2021] device (enp175s1v1.811): state change: activated -> deactivating (reason 'user-requested', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.2031] audit: op="connection-deactivate" uuid="03d79f31-408c-4672-ab52-4127aaca18d9" name="enp175s1v1.811" pid=3742 uid=0 result="success"
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.2036] device (enp175s1v1.811): state change: deactivating -> disconnected (reason 'user-requested', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 systemd[1]: Reloading Login and scanning of iSCSI devices.
Jul 12 11:10:46 prd-ovirt-host-03 iscsiadm[28036]: iscsiadm: No records found
Jul 12 11:10:46 prd-ovirt-host-03 systemd[1]: Reloaded Login and scanning of iSCSI devices.
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.2444] device (enp175s1v1.811): state change: disconnected -> unmanaged (reason 'user-requested', sys-iface-state: 'managed')
Jul 12 11:10:46 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552246.2460] audit: op="connection-delete" uuid="03d79f31-408c-4672-ab52-4127aaca18d9" name="enp175s1v1.811" pid=3742 uid=0 result="success"
Jul 12 11:10:46 prd-ovirt-host-03 systemd[1]: Reloading Login and scanning of iSCSI devices.
Jul 12 11:10:46 prd-ovirt-host-03 iscsiadm[28060]: iscsiadm: No records found
Jul 12 11:10:46 prd-ovirt-host-03 systemd[1]: Reloaded Login and scanning of iSCSI devices.
Jul 12 11:10:50 prd-ovirt-host-03 dbus-daemon[1746]: [system] Activating service name='org.fedoraproject.Setroubleshootd' requested by ':1.26' (uid=0 pid=1683 comm="/usr/sbin/sedispatch " label="system_u:system_r:auditd_t:s0") (using servicehelper)
Jul 12 11:10:50 prd-ovirt-host-03 dbus-daemon[28146]: [system] Failed to reset fd limit before activating service: org.freedesktop.DBus.Error.AccessDenied: Failed to restore old fd limit: Operation not permitted
Jul 12 11:10:51 prd-ovirt-host-03 dbus-daemon[1746]: [system] Successfully activated service 'org.fedoraproject.Setroubleshootd'
Jul 12 11:10:51 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552251.9082] checkpoint[0x563c82f87450]: rollback of /org/freedesktop/NetworkManager/Checkpoint/24
Jul 12 11:10:51 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552251.9103] manager: (enp175s1v1.811): new VLAN device (/org/freedesktop/NetworkManager/Devices/83)
Jul 12 11:10:51 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552251.9196] device (enp175s1v1.811): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Jul 12 11:10:51 prd-ovirt-host-03 systemd-udevd[28169]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jul 12 11:10:51 prd-ovirt-host-03 kernel: IPv6: ADDRCONF(NETDEV_UP): enp175s1v1.811: link is not ready
Jul 12 11:10:51 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552251.9321] device (enp175s1v1.811): carrier: link connected
Jul 12 11:10:51 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552251.9388] device (enp175s1v0): state change: activated -> deactivating (reason 'user-requested', sys-iface-state: 'managed')
Jul 12 11:10:51 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552251.9417] device (enp175s1v0.811): state change: activated -> deactivating (reason 'connection-removed', sys-iface-state: 'managed')
Jul 12 11:10:51 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552251.9448] audit: op="checkpoint-rollback" arg="/org/freedesktop/NetworkManager/Checkpoint/24" pid=3742 uid=0 result="success"
Jul 12 11:10:51 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552251.9455] device (enp175s1v1.811): state change: unavailable -> disconnected (reason 'user-requested', sys-iface-state: 'managed')
Jul 12 11:10:51 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552251.9543] device (enp175s1v1.811): Activation: starting connection 'enp175s1v1.811' (03d79f31-408c-4672-ab52-4127aaca18d9)
Jul 12 11:10:51 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552251.9544] device (enp175s1v0): state change: deactivating -> disconnected (reason 'user-requested', sys-iface-state: 'managed')
Jul 12 11:10:51 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552251.9609] device (enp175s1v0.811): state change: deactivating -> disconnected (reason 'connection-removed', sys-iface-state: 'managed')
Jul 12 11:10:51 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552251.9678] device (enp175s1v1.811): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:51 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552251.9684] device (enp175s1v1.811): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:52 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552252.0041] device (enp175s1v0.811): state change: disconnected -> unmanaged (reason 'user-requested', sys-iface-state: 'managed')
Jul 12 11:10:52 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552252.0052] device (enp175s1v1.811): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:52 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552252.0111] device (enp175s1v1.811): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:52 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552252.0129] device (enp175s1v1.811): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:52 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552252.0131] device (enp175s1v1.811): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Jul 12 11:10:52 prd-ovirt-host-03 NetworkManager[1871]: <info>  [1594552252.0143] device (enp175s1v1.811): Activation: successful, device activated.
Jul 12 11:10:52 prd-ovirt-host-03 systemd[1]: Reloading Login and scanning of iSCSI devices.
Jul 12 11:10:52 prd-ovirt-host-03 iscsiadm[28194]: iscsiadm: No records found
Jul 12 11:10:52 prd-ovirt-host-03 systemd[1]: Reloaded Login and scanning of iSCSI devices.
Jul 12 11:10:56 prd-ovirt-host-03 vdsm[4380]: ERROR Internal server error#012Traceback (most recent call last):#012  File "/usr/lib/python3.6/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request#012    res = method(**params)#012  File "/usr/lib/python3.6/site-packages/vdsm/rpc/Bridge.py", line 198, in _dynamicMethod#012    result = fn(*methodArgs)#012  File "<decorator-gen-480>", line 2, in setupNetworks#012  File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 50, in method#012    ret = func(*args, **kwargs)#012  File "/usr/lib/python3.6/site-packages/vdsm/API.py", line 1548, in setupNetworks#012    supervdsm.getProxy().setupNetworks(networks, bondings, options)#012  File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 56, in __call__#012    return callMethod()#012  File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>#012    **kwargs)#012  File "<string>", line 2, in setupNetworks#012  File "/usr/lib64/python3.6/multiprocessing/managers.py", line 772, in _callmethod#012    raise convert_to_error(kind, result)#012libnmstate.error.NmstateVerificationError: #012desired#012=======#012---#012name: enp175s1v0#012type: ethernet#012state: up#012ipv4:#012  enabled: false#012ipv6:#012  enabled: false#012mac-address: 02:00:00:00:00:01#012mtu: 9000#012#012current#012=======#012---#012name: enp175s1v0#012type: ethernet#012state: up#012ethernet:#012  auto-negotiation: false#012  duplex: full#012  speed: 10000#012ipv4:#012  enabled: false#012ipv6:#012  enabled: false#012mac-address: 02:00:00:00:00:01#012mtu: 1500#012#012difference#012==========#012--- desired#012+++ current#012@@ -2,9 +2,13 @@#012 name: enp175s1v0#012 type: ethernet#012 state: up#012+ethernet:#012+  auto-negotiation: false#012+  duplex: full#012+  speed: 10000#012 ipv4:#012   enabled: false#012 ipv6:#012   enabled: false#012 mac-address: 02:00:00:00:00:01#012-mtu: 9000#012+mtu: 1500



My oVirt Logical Network has MTU 9000, but the virtual interface has MTU 1500.
I changed the MTU of the physical network interface enp175s1 from 1500 to 9000, disabled the virtual functions, then enabled the virtual functions again.
After that, I was able to assign the Logical Network to my virtual interface.

8. Finally, reboot the oVirt host to confirm that the interfaces come up without issues and that host activation works.
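The manual workaround described above can be sketched as a shell sequence. This is a hypothetical illustration, not oVirt code: the PF name, VF count, and driver behavior vary per host, and writing to the sysfs sriov_numvfs knob requires root.

```shell
#!/bin/sh
# Hypothetical sketch of the manual MTU workaround; adjust PF name and VF count.
PF=enp175s1

echo 0 > "/sys/class/net/$PF/device/sriov_numvfs"   # remove the existing VFs
ip link set "$PF" mtu 9000                          # raise the PF MTU first
echo 2 > "/sys/class/net/$PF/device/sriov_numvfs"   # re-create the VFs
# The VFs now inherit a max_frame large enough for MTU 9000, so the
# Logical Network (MTU 9000) can be assigned in the oVirt UI afterwards.
```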


As a result, two issues were found here:
1. oVirt can't handle cases where the combined length of the interface name plus the VLAN ID suffix (e.g. 'enp175s0f0v0.811') exceeds the kernel's 15-character interface-name limit.
2. oVirt can't assign a Logical Network with MTU 9000 to a virtual interface while the physical interface is still configured with MTU 1500. This seems to be an oVirt software bug, because NetworkManager is able to prepare an ifcfg config with the required MTU.
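Issue 1 comes from the Linux kernel's IFNAMSIZ limit: an interface name, including the '.<vlan-id>' suffix, can be at most 15 visible characters. A minimal sketch of the length check (the helper name is illustrative, not oVirt code):

```python
# Linux limits interface names to IFNAMSIZ - 1 = 15 visible characters
# (the 16th byte is the terminating NUL).
IFNAMSIZ = 16

def vlan_name_fits(base: str, vlan_id: int) -> bool:
    """Return True if '<base>.<vlan_id>' fits in a Linux interface name."""
    return len(f"{base}.{vlan_id}") <= IFNAMSIZ - 1

# 'enp175s0f0v0.811' is 16 characters, one too many -> rejected
print(vlan_name_fits("enp175s0f0v0", 811))  # False
# 'enp175s1v0.811' is 14 characters -> accepted
print(vlan_name_fits("enp175s1v0", 811))    # True
```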



Ales, Dominik, as you can see, manually activating virtual functions for network cards is far too difficult.
As an oVirt user, I must say that oVirt's current behavior when working with VFs is a nightmare in a production environment.
It would be great if the oVirt team could fix this problem.
I hope I have provided all the information needed to fix the bug.

Comment 11 Ales Musil 2020-11-26 12:01:09 UTC
Hi Pavel,

I am investigating what can be done to avoid this in the future. Would you be willing to provide some more info from your machine?

For further investigation I will need the output of the following.

1) udevadm info /sys/class/net/enp175s0
2) cat /usr/lib/systemd/network/99-default.link

My thought is that your machine uses the naming scheme called "path". In general this naming results in longer names because it includes the PCI path of the device [0].
On the other hand, the naming scheme called "slot" should produce shorter device names, as it includes just the PCI slot [0].

The default systemd config should be as follows:

NamePolicy=kernel database onboard slot path


Thanks, 
Ales

[0] https://www.freedesktop.org/software/systemd/man/systemd.net-naming-scheme.html
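For reference, pinning a specific NIC to the shorter "slot" naming could look like the following drop-in. The file name and Match section are hypothetical; consult the systemd.link documentation for your systemd version before deploying, since renaming interfaces breaks existing ifcfg/NetworkManager profiles that reference the old names.

```ini
# /etc/systemd/network/98-ixgbe-slot-names.link  (hypothetical override)
[Match]
Driver=ixgbe

[Link]
NamePolicy=slot
MACAddressPolicy=persistent
```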

Comment 12 RHEL Program Management 2020-11-26 12:01:19 UTC
The documentation text flag should only be set after 'doc text' field is provided. Please provide the documentation text and set the flag to '?' again.

Comment 13 Pavel Zinchuk 2020-12-30 08:20:33 UTC
(In reply to Ales Musil from comment #11)

Hi Ales,

The information you requested:
# udevadm info /sys/class/net/enp175s0
P: /devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/net/enp175s0
E: DEVPATH=/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/net/enp175s0
E: ID_BUS=pci
E: ID_MODEL_FROM_DATABASE=82599ES 10-Gigabit SFI/SFP+ Network Connection (Ethernet Server Adapter X520-2)
E: ID_MODEL_ID=0x10fb
E: ID_NET_DRIVER=ixgbe
E: ID_NET_LINK_FILE=/usr/lib/systemd/network/99-default.link
E: ID_NET_NAME=enp175s0f0
E: ID_NET_NAME_MAC=enx001b21bd8614
E: ID_NET_NAME_PATH=enp175s0f0
E: ID_OUI_FROM_DATABASE=Intel Corporate
E: ID_PATH=pci-0000:af:00.0
E: ID_PATH_TAG=pci-0000_af_00_0
E: ID_PCI_CLASS_FROM_DATABASE=Network controller
E: ID_PCI_SUBCLASS_FROM_DATABASE=Ethernet controller
E: ID_VENDOR_FROM_DATABASE=Intel Corporation
E: ID_VENDOR_ID=0x8086
E: IFINDEX=7
E: INTERFACE=enp175s0
E: SUBSYSTEM=net
E: SYSTEMD_ALIAS=/sys/subsystem/net/devices/enp175s0 /sys/subsystem/net/devices/enp175s0
E: TAGS=:systemd:
E: USEC_INITIALIZED=7771314


# cat /usr/lib/systemd/network/99-default.link
#  SPDX-License-Identifier: LGPL-2.1+
#
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.

[Link]
NamePolicy=kernel database onboard slot path
MACAddressPolicy=persistent

