Previously, VDSM mistakenly assumed that the iface name and the iface initiator name were the same, so if the admin configured a specific initiator name on the host, its value was mistakenly overwritten by ifaceName in the iface file. In such
cases the host failed to establish the iSCSI connection with the target. Now, the iface name and the iface initiator name are designated separately, so iSCSI connections can be made correctly.
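The distinction can be illustrated with a small sketch (not VDSM's actual code): iface.iscsi_ifacename names the iface record (typically a NIC name), while iface.initiatorname must be an iSCSI qualified name (IQN/EUI) and must never be overwritten with the NIC name. The helper names below are hypothetical.

```python
# Illustrative sketch (not VDSM code): parse an open-iscsi iface record and
# flag the bug pattern where initiatorname was overwritten with the iface name.

def parse_iface_record(text):
    """Parse the 'key = value' lines of a /var/lib/iscsi/ifaces/* record."""
    record = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip BEGIN/END RECORD comments and blanks
        key, _, value = line.partition("=")
        record[key.strip()] = value.strip()
    return record

def initiatorname_is_valid(record):
    """An initiator name must be an IQN/EUI, not a copy of the NIC name."""
    name = record.get("iface.initiatorname", "")
    return (name.startswith(("iqn.", "eui."))
            and name != record.get("iface.iscsi_ifacename"))

buggy = """\
# BEGIN RECORD 6.2.0-873.10.el6
iface.iscsi_ifacename = eth0
iface.transport_name = tcp
iface.initiatorname = eth0
# END RECORD
"""

print(initiatorname_is_valid(parse_iface_record(buggy)))  # False: NIC name, not an IQN
```

A record fixed to carry a real IQN (for example one matching /etc/iscsi/initiatorname.iscsi) would pass this check.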
Created attachment 876250: engine and vdsm logs
Description of problem:
I have a host configured with two networks that are part of iSCSI multipathing, connected to a storage server which contains an iSCSI storage domain.
Storage domain activation failed on VDSM during the getVGInfo phase.
Version-Release number of selected component (if applicable):
RHEV3.4-AV3
vdsm-4.14.2-0.4.el6ev.x86_64
rhevm-3.4.0-0.5.master.el6ev.noarch
How reproducible:
Always
Steps to Reproduce:
On a shared DC:
- have 1 host with more than 1 connected NICs
- have an iscsi storage server with multiple targets
1. Add a new network to the cluster
2. Add the network to the host via RHEVM, and verify that the storage server is reachable from the host via both networks. In my case, I had a 'rhevm' bridge and a new 'iscsi1' bridge network configured on my host.
3. Create and activate an iSCSI storage domain
4. Under the 'Data Centers' main tab in the UI, go to the 'iSCSI Multipathing' sub-tab and configure a new multipath connection. Add both of the host's networks and all of the storage server's targets.
5. Put the iSCSI storage domain into maintenance
6. Activate the domain
Actual results:
ActivateStorageDomain failed because the 'vgs' command failed on VDSM:
vgs failure in vdsm.log:
Thread-76::DEBUG::2014-03-19 10:33:04,411::lvm::295::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_s
tate=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ \'a|/dev/mapper/3514f0c59af400076|/dev/mapper/3514f0c59af400094|/dev/mapper/3514f0c59af40010a|/dev/mapper/360060160f4a030008e918470a37be
311|/dev/mapper/360060160f4a0300090918470a37be311|/dev/mapper/360060160f4a0300092918470a37be311|/dev/mapper/360060160f4a0300094918470a37be311|/dev/mapper/360060160f4a03000949bb85f567ce311|/dev/mapper/360060160f4a0
300096918470a37be311|/dev/mapper/360060160f4a030009e73bb88a37be311|/dev/mapper/360060160f4a03000a073bb88a37be311|/dev/mapper/360060160f4a03000a273bb88a37be311|/dev/mapper/360060160f4a03000a473bb88a37be311|/dev/map
per/360060160f4a03000a673bb88a37be311|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --s
eparator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name c5db74fd-5dc8-4d1d-bd6e-d10d38c2c8a0' (cwd None)
Thread-76::DEBUG::2014-03-19 10:33:04,638::lvm::295::Storage.Misc.excCmd::(cmd) FAILED: <err> = ' /dev/mapper/3514f0c59af40010a: read failed after 0 of 4096 at 53687025664: Input/output error\n /dev/mapper/3514f
0c59af40010a: read failed after 0 of 4096 at 53687083008: Input/output error\n /dev/mapper/3514f0c59af40010a: read failed after 0 of 4096 at 0: Input/output error\n WARNING: Error counts reached a limit of 3. De
vice /dev/mapper/3514f0c59af40010a was disabled\n Volume group "c5db74fd-5dc8-4d1d-bd6e-d10d38c2c8a0" not found\n Skipping volume group c5db74fd-5dc8-4d1d-bd6e-d10d38c2c8a0\n'; <rc> = 5
Thread-76::WARNING::2014-03-19 10:33:04,643::lvm::377::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] [' /dev/mapper/3514f0c59af40010a: read failed after 0 of 4096 at 53687025664: Input/output error', ' /dev/map
per/3514f0c59af40010a: read failed after 0 of 4096 at 53687083008: Input/output error', ' /dev/mapper/3514f0c59af40010a: read failed after 0 of 4096 at 0: Input/output error', ' WARNING: Error counts reached a l
imit of 3. Device /dev/mapper/3514f0c59af40010a was disabled', ' Volume group "c5db74fd-5dc8-4d1d-bd6e-d10d38c2c8a0" not found', ' Skipping volume group c5db74fd-5dc8-4d1d-bd6e-d10d38c2c8a0']
Thread-76::ERROR::2014-03-19 10:33:04,644::sdc::143::Storage.StorageDomainCache::(_findDomain) domain c5db74fd-5dc8-4d1d-bd6e-d10d38c2c8a0 not found
In engine.log:
2014-03-19 10:33:04,372 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-4-thread-36) [452643d3] Command GetVGInfoVDSCommand(HostName = green-vdsa, HostId = 5b9fefa2-de39-450b-93b9-a8b5a228fadb, VGID=2RNFZd-wP80-KRBq-QeOL-gyrc-Oyl4-YEMYcf) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to GetVGInfoVDS, error = Volume Group does not exist: ('vg_uuid: 2RNFZd-wP80-KRBq-QeOL-gyrc-Oyl4-YEMYcf',), code = 506
Multipath configuration on vdsm:
[root@green-vdsa ifaces]# cat /var/lib/iscsi/ifaces/eth0
# BEGIN RECORD 6.2.0-873.10.el6
iface.iscsi_ifacename = eth0
iface.transport_name = tcp
iface.initiatorname = eth0
iface.vlan_id = 0
iface.vlan_priority = 0
iface.iface_num = 0
iface.mtu = 0
iface.port = 0
# END RECORD
[root@green-vdsa ifaces]# cat /var/lib/iscsi/ifaces/eth1
# BEGIN RECORD 6.2.0-873.10.el6
iface.iscsi_ifacename = eth1
iface.transport_name = tcp
iface.initiatorname = eth1
iface.vlan_id = 0
iface.vlan_priority = 0
iface.iface_num = 0
iface.mtu = 0
iface.port = 0
# END RECORD
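Note that iface.initiatorname holds the NIC name (eth0/eth1) rather than the host's initiator IQN, which is the overwrite this bug is about. A correct record would keep the iface name but carry a real initiator IQN; the IQN below is a placeholder, typically matching /etc/iscsi/initiatorname.iscsi:

```text
# BEGIN RECORD 6.2.0-873.10.el6
iface.iscsi_ifacename = eth0
iface.transport_name = tcp
iface.initiatorname = iqn.1994-05.com.redhat:example-host
iface.vlan_id = 0
iface.vlan_priority = 0
iface.iface_num = 0
iface.mtu = 0
iface.port = 0
# END RECORD
```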
Output of 'vgs' while the host has iSCSI multipathing configured:
VG #PV #LV #SN Attr VSize VFree
1d11607f-6e7e-48df-908f-4b28913aad9d 1 23 0 wz--n- 99.62g 61.00g
2d1320db-fa04-44bd-9cc2-50da94620d2b 1 16 0 wz--n- 99.62g 79.75g
6e570742-5756-4e93-b0fa-58383102ebfd 1 22 0 wz--n- 49.62g 3.38g
90122060-94b1-49d9-b50f-f96770e8822e 1 6 0 wz--n- 49.62g 45.75g
a17635fa-bf8d-4d9c-8389-f82a9e05b3c3 1 6 0 wz--n- 49.62g 45.75g
vg0 1 3 0 wz--n- 67.77g 0
vg0 1 2 0 wz--n- 100.00g 0
=============
Expected results:
Storage domain activation should succeed when iSCSI multipathing is configured.
Additional info:
engine and vdsm logs
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
http://rhn.redhat.com/errata/RHBA-2014-0504.html