Created attachment 1270782 [details]
vdsm.log from host

Description of problem:
We have a new oVirt 4.1.1 cluster running with OVS networks and iSCSI hosted_storage and data domains (same target, different LUNs). Our "ovirtsan" network consists of two bonded interfaces. When we try to configure iSCSI Multipathing within the engine, the engine tries to use "bond1" as the iface name instead of "ovirtsan."

Version-Release number of selected component (if applicable):
vdsm-4.19.10.1-1

How reproducible:
always

Steps to Reproduce:
1. create a new oVirt 4.1.1 cluster
2. connect an iSCSI storage domain
3. create an "ovirtsan" network that consists of two bonded interfaces
4. set up an iSCSI bond using ovirtsan and all of the storage targets.

Actual results:
vdsm creates and tries to use a "bond1" iface.

Expected results:
vdsm should either use the "default" iface or create and use an iface that matches the bridge ("ovirtsan").

Additional info:

[root@lnxvirt02 ~]# ping -I bond1 192.168.56.50
ping: Warning: source address might be selected on device other than bond1.
PING 192.168.56.50 (192.168.56.50) from 192.168.55.82 bond1: 56(84) bytes of data.
^C
--- 192.168.56.50 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 1999ms

[root@lnxvirt02 ~]# ping -I ovirtsan 192.168.56.50
PING 192.168.56.50 (192.168.56.50) from 192.168.56.82 ovirtsan: 56(84) bytes of data.
64 bytes from 192.168.56.50: icmp_seq=1 ttl=64 time=0.196 ms
64 bytes from 192.168.56.50: icmp_seq=2 ttl=64 time=0.884 ms
^C
--- 192.168.56.50 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.196/0.540/0.884/0.344 ms

[root@lnxvirt02 ~]# ovs-vsctl show
d3b4d20f-2bc4-4462-a5bc-b54faa90c175
    Bridge vdsmbr_BvAbEmQf
        Port vdsmbr_BvAbEmQf
            Interface vdsmbr_BvAbEmQf
                type: internal
        Port classepublic
            Interface classepublic
                type: internal
        Port "ens1f0"
            Interface "ens1f0"
    Bridge "vdsmbr_RTryZSV4"
        Port "vnet0"
            Interface "vnet0"
        Port ovirtmgmt
            Interface ovirtmgmt
                type: internal
        Port "bond0"
            Interface "bond0"
        Port "vdsmbr_RTryZSV4"
            Interface "vdsmbr_RTryZSV4"
                type: internal
    Bridge "vdsmbr_nU9Qykps"
        Port "bond1"
            Interface "bond1"
        Port "vdsmbr_nU9Qykps"
            Interface "vdsmbr_nU9Qykps"
                type: internal
        Port ovirtsan
            Interface ovirtsan
                type: internal
    ovs_version: "2.7.0"

[root@lnxvirt02 ~]# systemctl status vdsmd -l
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2017-04-11 09:30:56 EDT; 32s ago
  Process: 25864 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh --post-stop (code=exited, status=0/SUCCESS)
  Process: 25921 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
 Main PID: 25991 (vdsm)
   CGroup: /system.slice/vdsmd.service
           ├─25991 /usr/bin/python2 /usr/share/vdsm/vdsm
           ├─26290 /usr/bin/sudo -n /usr/sbin/iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.uid58207.001 -I bond1 -p 192.168.56.50:3260,1 -l
           └─26293 /usr/sbin/iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.uid58207.001 -I bond1 -p 192.168.56.50 3260 1 -l

Apr 11 09:30:56 lnxvirt02.classe.cornell.edu python2[25991]: DIGEST-MD5 client step 2
Apr 11 09:30:56 lnxvirt02.classe.cornell.edu python2[25991]: DIGEST-MD5 parse_server_challenge()
Apr 11 09:30:56 lnxvirt02.classe.cornell.edu python2[25991]: DIGEST-MD5 ask_user_info()
Apr 11 09:30:56 lnxvirt02.classe.cornell.edu python2[25991]: DIGEST-MD5 make_client_response()
Apr 11 09:30:56 lnxvirt02.classe.cornell.edu python2[25991]: DIGEST-MD5 client step 3
Apr 11 09:30:57 lnxvirt02.classe.cornell.edu vdsm[25991]: vdsm MOM WARN MOM not available.
Apr 11 09:30:57 lnxvirt02.classe.cornell.edu vdsm[25991]: vdsm MOM WARN MOM not available, KSM stats will be missing.
Apr 11 09:30:57 lnxvirt02.classe.cornell.edu vdsm[25991]: vdsm root ERROR failed to retrieve Hosted Engine HA info
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231, in _getHaInfo
    stats = instance.get_all_stats()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 105, in get_all_stats
    stats = broker.get_stats_from_storage(service)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 233, in get_stats_from_storage
    result = self._checked_communicate(request)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 261, in _checked_communicate
    .format(message or response))
RequestError: Request failed: failed to read metadata: [Errno 2] No such file or directory: '/rhev/data-center/mnt/blockSD/1d6b5e03-6bab-486c-bba2-5426d44a901b/ha_agent/hosted-engine.metadata'
Apr 11 09:30:57 lnxvirt02.classe.cornell.edu vdsm[25991]: vdsm vds WARN Not ready yet, ignoring event u'|virt|VM_status|79d400f2-b30e-4b73-8de1-6466ad6c2492' args={u'79d400f2-b30e-4b73-8de1-6466ad6c2492': {'status': 'Up', 'displayInfo': [{'tlsPort': '5900', 'ipAddress': '0', 'type': u'spice', 'port': '-1'}], 'hash': '-1866639279700397934', 'displayIp': '0', 'displayPort': '-1', 'displaySecurePort': '5900', 'timeOffset': '0', 'pauseCode': 'NOERR', 'vcpuQuota': '-1', 'cpuUser': '0.00', 'monitorResponse': '0', 'elapsedTime': '75010', 'displayType': 'qxl', 'cpuSys': '0.00', 'clientIp': u'', 'vcpuPeriod': 100000L}}
Apr 11 09:30:57 lnxvirt02.classe.cornell.edu vdsm[25991]: vdsm vds WARN Not ready yet, ignoring event u'|virt|VM_status|79d400f2-b30e-4b73-8de1-6466ad6c2492' args={u'79d400f2-b30e-4b73-8de1-6466ad6c2492': {'status': 'Up', 'username': 'Unknown', 'memUsage': '25', 'guestFQDN': '', 'memoryStats': {u'swap_out': '0', u'majflt': '0', u'swap_usage': '0', u'mem_cached': '530452', u'mem_free': '12561040', u'mem_buffers': '99208', u'swap_in': '0', u'swap_total': '0', u'pageflt': '1019', u'mem_total': '16265800', u'mem_unused': '11931380'}, 'session': 'Unknown', 'netIfaces': [], 'guestCPUCount': -1, 'appsList': (), 'guestIPs': '', 'disksUsage': []}}
If I manually change the bond1 iface to use ovirtsan, vdsm complains but starts and works. The host can run VMs, etc.

[root@lnxvirt02 ~]# iscsiadm -m iface
default tcp,<empty>,<empty>,<empty>,<empty>
iser iser,<empty>,<empty>,<empty>,<empty>
bond1 tcp,<empty>,<empty>,bond1,<empty>

[root@lnxvirt02 ~]# iscsiadm -m iface -I bond1 --op=update -n iface.net_ifacename -v ovirtsan
bond1 updated.

[root@lnxvirt02 ~]# iscsiadm -m iface
default tcp,<empty>,<empty>,<empty>,<empty>
iser iser,<empty>,<empty>,<empty>,<empty>
bond1 tcp,<empty>,<empty>,ovirtsan,<empty>

[root@lnxvirt02 ~]# systemctl stop vdsmd ; rm -f /var/log/vdsm/vdsm.log ; systemctl restart vdsmd

[root@lnxvirt02 ~]# systemctl status vdsmd -l
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2017-04-11 09:46:43 EDT; 12min ago
  Process: 31878 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh --post-stop (code=exited, status=0/SUCCESS)
  Process: 31889 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
 Main PID: 31959 (vdsm)
   CGroup: /system.slice/vdsmd.service
           ├─31959 /usr/bin/python2 /usr/share/vdsm/vdsm
           └─36877 /usr/libexec/ioprocess --read-pipe-fd 60 --write-pipe-fd 59 --max-threads 10 --max-queued-requests 10

Apr 11 09:50:29 lnxvirt02.classe.cornell.edu vdsm[31959]: vdsm root ERROR failed to retrieve Hosted Engine HA info
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231, in _getHaInfo
    stats = instance.get_all_stats()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 105, in get_all_stats
    stats = broker.get_stats_from_storage(service)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 233, in get_stats_from_storage
    result = self._checked_communicate(request)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 261, in _checked_communicate
    .format(message or response))
RequestError: Request failed: failed to read metadata: [Errno 2] No such file or directory: '/rhev/data-center/mnt/blockSD/1d6b5e03-6bab-486c-bba2-5426d44a901b/ha_agent/hosted-engine.metadata'

[The identical "failed to retrieve Hosted Engine HA info" error and traceback are logged again at 09:50:44 (twice), 09:50:59 (twice), 09:51:14, 09:51:16, 09:51:29, and 09:51:32.]

Apr 11 09:55:06 lnxvirt02.classe.cornell.edu vdsm[31959]: vdsm root ERROR iSCSI netIfaceName coming from engine [bond1] is different from iface.net_ifacename present on the system [ovirtsan]. Aborting iscsi iface [bond1] configuration.
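For anyone hitting the same mismatch, the two sides of vdsm's comparison can be inspected directly on the host with stock open-iscsi/iproute2 commands. This is only an illustrative sketch; the device names below are the ones from this report and will differ elsewhere.

# The 4th comma-separated field of each record is iface.net_ifacename,
# which is what gets compared against the netIfaceName sent by the engine.
iscsiadm -m iface

# Confirm which device actually carries the SAN-side address.
ip -o -4 addr show dev ovirtsan
ip -o -4 addr show dev bond1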
Created attachment 1270786 [details] vdsm.log from host after manually renaming iface
From what I'm seeing, you are using the experimental OVS switch type. Can you reproduce this with standard networking?
Thanks, Yaniv. It wasn't clear to me that OVS was still experimental after reading the docs and looking at the UI ("Legacy" vs "OVS"). I'll follow up after trying to reproduce this with the Legacy switch type.

Do you have any sense of when OVS will be fully supported and no longer considered experimental?

Thanks again,
Devin
With Legacy networking, vdsm does not create a bridge for the iSCSI Bond network. Therefore, the "bond1" iface and the iSCSI bond work just fine. I guess we'll have to revert to legacy networking for now. If it would help, I could submit a new bug that accurately characterizes the problem with iSCSI Bonds and OVS switches (namely, that a bridge is created for the iSCSI Bond network when it shouldn't be). Thanks again, Devin
(In reply to Devin Bougie from comment #5)
> With Legacy networking, vdsm does not create a bridge for the iSCSI Bond
> network. Therefore, the "bond1" iface and the iSCSI bond work just fine.

In both switch types a bridge may exist: with OVS networking the bridge is always there and with the Linux bridge it is there only when the network is marked as a VM network.
I guess the network you defined is not marked as a VM network (please confirm).

Just to confirm the problem is indeed happening only when the network is a bridged one, please mark it as a VM network and check again if you can.
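For reference, whether a given network is currently bridged on the host can be checked from the CLI; a rough sketch (the "ovirtsan" name is taken from this report, and these are not the commands vdsm itself runs):

# Legacy switch type: VM networks show up as Linux bridge devices.
ip -o link show type bridge

# OVS switch type: the network is an internal port on a vdsm-created OVS bridge.
ovs-vsctl port-to-br ovirtsan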
(In reply to Edward Haas from comment #6)
> In both switch types a bridge may exist: with OVS networking the bridge is
> always there and with the Linux bridge it is there only when the network is
> marked as a VM network.
> I guess the network you defined is not marked as a VM network (please
> confirm).

Yes, that is correct.

> Just to confirm the problem is indeed happening only when the network is a
> bridged one, please mark it as a VM network and check again if you can.

Yes, the problem does happen with Legacy switches when the iSCSI Bond network is marked as a VM network. Once the bridge is created, the bond1 iface no longer works and would need to be changed to match the bridge interface name (in my case, ovirtsan).

I'm attaching vdsm logs from a host a few minutes before and after the network is marked as a VM network.

Thanks for helping clarify the problem, and please let me know if there's anything else I can do to help.

Many thanks,
Devin
Created attachment 1271437 [details] vdsm.log from a host before and after the iSCSI Bond network is marked as a VM network.
(In reply to Devin Bougie from comment #7)
> > Just to confirm the problem is indeed happening only when the network is a
> > bridged one, please mark it as a VM network and check again if you can.
>
> Yes, the problem does happen with Legacy switches when the iSCSI Bond
> network is marked as a VM network. Once the bridge is created, the bond1
> iface no longer works and would need to be changed to match the bridge
> interface name (in my case, ovirtsan).

We have not managed to recreate this with the Linux bridge; it seems to work well in our tests. Could you please recheck?
Perhaps some bridge leftovers exist from the OVS setup and multipath tries to use those instead of the new Linux bridge.
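If it helps anyone retesting this, leftover OVS state and the netdev binding of the active iSCSI sessions can be checked roughly as follows (illustrative only; "vdsmbr" is the bridge-name prefix seen in the ovs-vsctl output earlier in this report):

# Any OVS bridges still defined after switching back to the Linux bridge?
ovs-vsctl show

# Any stale vdsm-created OVS bridge devices left behind?
ip -o link show | grep -i vdsmbr

# Which netdev is each logged-in iSCSI session actually bound to?
iscsiadm -m session -P 1 | grep -E 'Iface Name:|Iface Netdev:'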
Thanks for taking a look. I actually deleted openvswitch and rebooted the hosts, so I'm pretty sure there were no OVS remnants left hanging around. Unfortunately, I've already reverted my config and moved my cluster into production, so I don't have a good way to test this again at the moment.

As we've now reverted to legacy networking and don't bridge our iSCSI SAN, this is no longer an issue for us. It will only become a potential issue when OVS is officially supported and we plan a migration from Legacy networking to OVS.

Please let me know if there is any more information I can provide or anything else I can do to help.

Thanks again,
Devin
Hannes, is there a known issue with iSCSI on top of openvswitch?
Hi Dan, not that I know of. It should work; everything else is a bug. Could you describe the problem? I couldn't follow the bug description so far. Maybe, if you can isolate the problem down to the kernel, open a separate bug for that? Thanks!
Sorry Hannes (and others) for the distraction. As Devin reported in comment 1, the issue is completely in oVirt, not in multipath or openvswitch.

I believe that if Engine's connectStorageServer command had ifaceName=default and netIfaceName=northbound_network_device (the one with the IP address), an iSCSI bond over an openvswitch network would work.

Maor, can you try that?
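The underlying intent, at the iscsiadm level, is to log in through an iface record whose iface.net_ifacename points at the device that actually holds the storage IP. A hedged sketch (the IQN and portal are copied from the original report, the iface record name is arbitrary, and this is not the exact code path vdsm takes):

# Create a dedicated iface record bound to the northbound device ("ovirtsan"
# here), then log in through it instead of through "bond1".
iscsiadm -m iface -I ovirtsan --op=new
iscsiadm -m iface -I ovirtsan --op=update -n iface.net_ifacename -v ovirtsan
iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.uid58207.001 -p 192.168.56.50:3260 -I ovirtsan -l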
Hi Dan,

I have tried to change the engine part which initializes the iface sent to VDSM so that it sends "default", but it seems to fail for the following reason:

"iscsiadm: Updating iface while iscsi sessions are using it. You must logout the running sessions then log back in for the new settings to take affect."

It seems that this is the command we are running through iscsiadm:
_runCmd(["-m", "iface", "-I", name, "-n", key, "-v", value, "--op=update"])

Here is the request sent from the engine with iface='default' and netIfaceName='new_network':

START, ConnectStorageServerVDSCommand(HostName = pluto-vdsb.eng.lab.tlv.redhat.com, StorageServerConnectionManagementVDSParameters:{runAsync='true', hostId='c59c741d-a683-4ceb-96a8-842f049a42c7', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='ISCSI', connectionList='[
  StorageServerConnections:{id='af4703b1-e55b-400f-bb3e-48e77ce56fca', connection='10.35.16.55', iqn='iqn.2015-07.com.mlipchuk3.redhat:444', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='default', netIfaceName='new_network'},
  StorageServerConnections:{id='37442c4a-f86b-46c8-a8e9-f4bfb6365e1b', connection='10.35.16.55', iqn='iqn.2015-07.com.mlipchuk1.redhat:444', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='default', netIfaceName='new_network'},
  StorageServerConnections:{id='7c174575-7b9f-4dc6-afd7-997d3466bb42', connection='10.35.16.55', iqn='iqn.2015-07.com.mlipchuk2.redhat:444', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='default', netIfaceName='new_network'},
  StorageServerConnections:{id='c35c6156-8a75-4cf4-80e4-ebc8310edb03', connection='10.35.16.55', iqn='iqn.2015-07.com.mlipchuk3.redhat:444', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='default', netIfaceName='new_network2'},
  StorageServerConnections:{id='b63256cd-4354-46a9-a483-aa5a86756390', connection='10.35.16.55', iqn='iqn.2015-07.com.mlipchuk1.redhat:444', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='default', netIfaceName='new_network2'},
  StorageServerConnections:{id='ee312b67-1f2e-4797-af64-fd9e00586268', connection='10.35.16.55', iqn='iqn.2015-07.com.mlipchuk2.redhat:444', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='default', netIfaceName='new_network2'}
]'}), log id: 1668527b

I can see that in VDSM we get the following error:

IscsiInterfaceUpdateError: ('default', 7, [], ['iscsiadm: Updating iface while iscsi sessions are using it. You must logout the running sessions then log back in for the new settings to take affect.', 'iscsiadm: iface default is a special interface and cannot be modified.', 'iscsiadm: Could not update iface default: invalid parameter'])

And this is the error we get on the engine side:

ConnectStorageServerVDS failed: Error storage server connection: (u"domType=3, spUUID=00000000-0000-0000-0000-000000000000, conList=[
  {u'netIfaceName': u'new_network', u'id': u'af4703b1-e55b-400f-bb3e-48e77ce56fca', u'connection': u'10.35.16.55', u'iqn': u'iqn.2015-07.com.mlipchuk3.redhat:444', u'user': u'', u'tpgt': u'1', u'ifaceName': u'default', u'password': '********', u'port': u'3260'},
  {u'netIfaceName': u'new_network', u'id': u'37442c4a-f86b-46c8-a8e9-f4bfb6365e1b', u'connection': u'10.35.16.55', u'iqn': u'iqn.2015-07.com.mlipchuk1.redhat:444', u'user': u'', u'tpgt': u'1', u'ifaceName': u'default', u'password': '********', u'port': u'3260'},
  {u'netIfaceName': u'new_network', u'id': u'7c174575-7b9f-4dc6-afd7-997d3466bb42', u'connection': u'10.35.16.55', u'iqn': u'iqn.2015-07.com.mlipchuk2.redhat:444', u'user': u'', u'tpgt': u'1', u'ifaceName': u'default', u'password': '********', u'port': u'3260'},
  {u'netIfaceName': u'new_network2', u'id': u'c35c6156-8a75-4cf4-80e4-ebc8310edb03', u'connection': u'10.35.16.55', u'iqn': u'iqn.2015-07.com.mlipchuk3.redhat:444', u'user': u'', u'tpgt': u'1', u'ifaceName': u'default', u'password': '********', u'port': u'3260'},
  {u'netIfaceName': u'new_network2', u'id': u'b63256cd-4354-46a9-a483-aa5a86756390', u'connection': u'10.35.16.55', u'iqn': u'iqn.2015-07.com.mlipchuk1.redhat:444', u'user': u'', u'tpgt': u'1', u'ifaceName': u'default', u'password': '********', u'port': u'3260'},
  {u'netIfaceName': u'new_network2', u'id': u'ee312b67-1f2e-4797-af64-fd9e00586268', u'connection': u'10.35.16.55', u'iqn': u'iqn.2015-07.com.mlipchuk2.redhat:444', u'user': u'', u'tpgt': u'1', u'ifaceName': u'default', u'password': '********', u'port': u'3260'}
]",)

I also attached the logs.
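For context, both parts of that failure come from iscsiadm itself rather than from oVirt: the built-in "default" record cannot be edited at all, and any record with logged-in sessions cannot be changed until those sessions are logged out. A rough illustration (the target and portal are placeholders, and "my_iface"/"new_network" are made-up names):

# 1) The built-in record is special; this is expected to fail regardless of sessions:
iscsiadm -m iface -I default --op=update -n iface.net_ifacename -v new_network

# 2) A custom record can only be updated while no session is using it:
iscsiadm -m node -T <target-iqn> -p <portal> -u
iscsiadm -m iface -I my_iface --op=update -n iface.net_ifacename -v new_network
iscsiadm -m node -T <target-iqn> -p <portal> -I my_iface -l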
Created attachment 1291656 [details] engine log
Created attachment 1291657 [details] vdsm log
We didn't get to this bug for more than 2 years, and it's not being considered for the upcoming 4.4. It's unlikely that it will ever be addressed, so I'm suggesting to close it. If you feel this needs to be addressed and want to work on it, please remove the cond nack and set the target accordingly.
If we are using nmstate to manage OVS hosts, we should reproduce this.
This bug has not been prioritized or updated for a long time and is therefore deemed stale. Closing for now; please feel free to update and reopen, but kindly provide a justification or a development plan for how/when to address this bug.