Created attachment 1015107 [details]
iscsi-lun-rhevh7.png

Description of problem:
1. The boot LUN should not be listed in the iSCSI storage domain on a RHEV-H 7.1 multipath host.
2. All LUNs are displayed twice on the multipath machine. In the screenshot, the multipath LUNs have two paths and share the same serial ID, but they should still not be displayed twice.

Version-Release number of selected component (if applicable):
rhev-hypervisor7-7.1-20150414.63
ovirt-node-3.2.2-3.el7.noarch
vdsm-4.16.13.1-1.el7ev.x86_64
RHEVM VT14.3 (3.5.1-0.4)

How reproducible:
100%

Steps to Reproduce:
1. Install rhev-hypervisor7-7.1-20150414.63 on a multipath machine.
2. Register the host to RHEVM.
3. Connect iSCSI storage.
4. Check the LUN list in the pop-up window.

Actual results:
1. The boot LUN is listed in the iSCSI storage domain on the RHEV-H 7.1 multipath host.
2. All LUNs are displayed twice on the multipath machine, even though they have the same serial ID.

Expected results:
1. The boot LUN should not be listed in the iSCSI storage domain on the RHEV-H 7.1 multipath host.
2. Each LUN should be displayed only once on the multipath machine.
Additional info:
=========================================================
# lsblk --nodeps -o name,serial
NAME  SERIAL
sda   5000c5001d5b2973
sdb   60a9800050334c33424b334163434546
sdc   60a9800050334c33424b334166784f55
sdd   60a9800050334c33424b334167714852
sde   60a9800050334c33424b334167742f70
sdf   60a9800050334c33424b334167756648
sdg   60a9800050334c33424b334163434546
sdh   60a9800050334c33424b334166784f55
sdi   60a9800050334c33424b334167714852
sdj   60a9800050334c33424b334167742f70
sdk   60a9800050334c33424b334167756648
sr0   005CD005080
sr1   110052081500
loop0
loop1
loop2
=========================================================
[root@hp-z600-03 admin]# multipath -ll
Apr 16 06:51:57 | multipath.conf +5, invalid keyword: getuid_callout
Apr 16 06:51:57 | multipath.conf +18, invalid keyword: getuid_callout
Apr 16 06:51:57 | multipath.conf +37, invalid keyword: getuid_callout
35000c5001d5b2973 dm-23 SEAGATE ,ST3146356SS
size=137G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 6:0:0:0 sda 8:0 active ready running
360a9800050334c33424b334166784f55 dm-0 NETAPP ,LUN
size=19G features='3 pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=2 status=active
  |- 0:0:0:1 sdc 8:32  active ready running
  `- 0:0:1:1 sdh 8:112 active ready running
360a9800050334c33424b334163434546 dm-1 NETAPP ,LUN
size=25G features='3 pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=2 status=active
  |- 0:0:0:0 sdb 8:16 active ready running
  `- 0:0:1:0 sdg 8:96 active ready running
360a9800050334c33424b334167714852 dm-2 NETAPP ,LUN
size=1021M features='3 pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=2 status=active
  |- 0:0:0:2 sdd 8:48  active ready running
  `- 0:0:1:2 sdi 8:128 active ready running
360a9800050334c33424b334167742f70 dm-3 NETAPP ,LUN
size=2.0G features='3 pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=2 status=active
  |- 0:0:0:3 sde 8:64  active ready running
  `- 0:0:1:3 sdj 8:144 active ready running
360a9800050334c33424b334167756648 dm-4 NETAPP ,LUN
size=3.0G features='3 pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=2 status=active
  |- 0:0:0:4 sdf 8:80  active ready running
  `- 0:0:1:4 sdk 8:160 active ready running
=========================================================
[root@hp-z600-03 admin]# lsblk -o name,serial
NAME                                                  SERIAL
sda                                                   5000c5001d5b2973
└─35000c5001d5b2973
  ├─35000c5001d5b2973p1
  ├─35000c5001d5b2973p2
  ├─35000c5001d5b2973p3
  └─35000c5001d5b2973p4
sdb                                                   60a9800050334c33424b334163434546
└─360a9800050334c33424b334163434546
  ├─360a9800050334c33424b334163434546p1
  ├─360a9800050334c33424b334163434546p2
  ├─360a9800050334c33424b334163434546p3
  └─360a9800050334c33424b334163434546p4
    ├─HostVG-Swap
    ├─HostVG-Config
    ├─HostVG-Logging
    └─HostVG-Data
sdc                                                   60a9800050334c33424b334166784f55
└─360a9800050334c33424b334166784f55
  ├─29cff6c8--2135--438b--aa76--86be1acba1a6-metadata
  ├─29cff6c8--2135--438b--aa76--86be1acba1a6-outbox
  ├─29cff6c8--2135--438b--aa76--86be1acba1a6-leases
  ├─29cff6c8--2135--438b--aa76--86be1acba1a6-ids
  ├─29cff6c8--2135--438b--aa76--86be1acba1a6-inbox
  └─29cff6c8--2135--438b--aa76--86be1acba1a6-master
sdd                                                   60a9800050334c33424b334167714852
└─360a9800050334c33424b334167714852
sde                                                   60a9800050334c33424b334167742f70
└─360a9800050334c33424b334167742f70
sdf                                                   60a9800050334c33424b334167756648
└─360a9800050334c33424b334167756648
sdg                                                   60a9800050334c33424b334163434546
└─360a9800050334c33424b334163434546
  ├─360a9800050334c33424b334163434546p1
  ├─360a9800050334c33424b334163434546p2
  ├─360a9800050334c33424b334163434546p3
  └─360a9800050334c33424b334163434546p4
    ├─HostVG-Swap
    ├─HostVG-Config
    ├─HostVG-Logging
    └─HostVG-Data
sdh                                                   60a9800050334c33424b334166784f55
└─360a9800050334c33424b334166784f55
  ├─29cff6c8--2135--438b--aa76--86be1acba1a6-metadata
  ├─29cff6c8--2135--438b--aa76--86be1acba1a6-outbox
  ├─29cff6c8--2135--438b--aa76--86be1acba1a6-leases
  ├─29cff6c8--2135--438b--aa76--86be1acba1a6-ids
  ├─29cff6c8--2135--438b--aa76--86be1acba1a6-inbox
  └─29cff6c8--2135--438b--aa76--86be1acba1a6-master
sdi                                                   60a9800050334c33424b334167714852
└─360a9800050334c33424b334167714852
sdj                                                   60a9800050334c33424b334167742f70
└─360a9800050334c33424b334167742f70
sdk                                                   60a9800050334c33424b334167756648
└─360a9800050334c33424b334167756648
sr0                                                   005CD005080
sr1                                                   110052081500
loop0
loop1
├─live-rw
└─live-base
loop2
└─live-rw
[root@hp-z600-03 admin]# blkid -L Root
/dev/mapper/360a9800050334c33424b334163434546p3
=========================================================
# vdsClient -s 0 getDeviceList
[{'GUID': '360a9800050334c33424b334166784f55',
  'capacity': '20403191808',
  'devtype': 'iSCSI',
  'fwrev': '7320',
  'logicalblocksize': '512',
  'pathlist': [{'connection': '10.66.90.116', 'initiatorname': '(null)',
                'iqn': 'iqn.1992-08.com.netapp:sn.135053389', 'port': '3260', 'portal': '1001'},
               {'connection': '10.66.90.115', 'initiatorname': '(null)',
                'iqn': 'iqn.1992-08.com.netapp:sn.135053389', 'port': '3260', 'portal': '1000'}],
  'pathstatus': [{'lun': '1', 'physdev': 'sdc', 'state': 'active', 'type': 'iSCSI'},
                 {'lun': '1', 'physdev': 'sdh', 'state': 'active', 'type': 'iSCSI'}],
  'physicalblocksize': '512',
  'productID': 'LUN',
  'pvUUID': 'wFqC21-bAze-VMQz-lch3-uxZ5-nqmP-b3ffCr',
  'serial': 'SNETAPP_LUN_P3L3BK3AfxOU',
  'status': 'used',
  'vendorID': 'NETAPP',
  'vgUUID': '3NhdnC-kkuC-qMdZ-ynRT-Xazk-df7g-hWGoEJ'},
 {'GUID': '360a9800050334c33424b334163434546',
  'capacity': '26684162048',
  'devtype': 'iSCSI',
  'fwrev': '7320',
  'logicalblocksize': '512',
  'pathlist': [{'connection': '10.66.90.116', 'initiatorname': '(null)',
                'iqn': 'iqn.1992-08.com.netapp:sn.135053389', 'port': '3260', 'portal': '1001'},
               {'connection': '10.66.90.115', 'initiatorname': '(null)',
                'iqn': 'iqn.1992-08.com.netapp:sn.135053389', 'port': '3260', 'portal': '1000'}],
  'pathstatus': [{'lun': '0', 'physdev': 'sdb', 'state': 'active', 'type': 'iSCSI'},
                 {'lun': '0', 'physdev': 'sdg', 'state': 'active', 'type': 'iSCSI'}],
  'physicalblocksize': '512',
  'productID': 'LUN',
  'pvUUID': '',
  'serial': 'SNETAPP_LUN_P3L3BK3AcCEF',
  'status': 'used',
  'vendorID': 'NETAPP',
  'vgUUID': ''},
 {'GUID': '360a9800050334c33424b334167714852',
  'capacity': '1070596096',
  'devtype': 'iSCSI',
  'fwrev': '7320',
  'logicalblocksize': '512',
  'pathlist': [{'connection': '10.66.90.116', 'initiatorname': '(null)',
                'iqn': 'iqn.1992-08.com.netapp:sn.135053389', 'port': '3260', 'portal': '1001'},
               {'connection': '10.66.90.115', 'initiatorname': '(null)',
                'iqn': 'iqn.1992-08.com.netapp:sn.135053389', 'port': '3260', 'portal': '1000'}],
  'pathstatus': [{'lun': '2', 'physdev': 'sdd', 'state': 'active', 'type': 'iSCSI'},
                 {'lun': '2', 'physdev': 'sdi', 'state': 'active', 'type': 'iSCSI'}],
  'physicalblocksize': '512',
  'productID': 'LUN',
  'pvUUID': '',
  'serial': 'SNETAPP_LUN_P3L3BK3AgqHR',
  'status': 'free',
  'vendorID': 'NETAPP',
  'vgUUID': ''},
 {'GUID': '360a9800050334c33424b334167742f70',
  'capacity': '2142240768',
  'devtype': 'iSCSI',
  'fwrev': '7320',
  'logicalblocksize': '512',
  'pathlist': [{'connection': '10.66.90.116', 'initiatorname': '(null)',
                'iqn': 'iqn.1992-08.com.netapp:sn.135053389', 'port': '3260', 'portal': '1001'},
               {'connection': '10.66.90.115', 'initiatorname': '(null)',
                'iqn': 'iqn.1992-08.com.netapp:sn.135053389', 'port': '3260', 'portal': '1000'}],
  'pathstatus': [{'lun': '3', 'physdev': 'sde', 'state': 'active', 'type': 'iSCSI'},
                 {'lun': '3', 'physdev': 'sdj', 'state': 'active', 'type': 'iSCSI'}],
  'physicalblocksize': '512',
  'productID': 'LUN',
  'pvUUID': '',
  'serial': 'SNETAPP_LUN_P3L3BK3Agt_p',
  'status': 'free',
  'vendorID': 'NETAPP',
  'vgUUID': ''},
 {'GUID': '360a9800050334c33424b334167756648',
  'capacity': '3212836864',
  'devtype': 'iSCSI',
  'fwrev': '7320',
  'logicalblocksize': '512',
  'pathlist': [{'connection': '10.66.90.116', 'initiatorname': '(null)',
                'iqn': 'iqn.1992-08.com.netapp:sn.135053389', 'port': '3260', 'portal': '1001'},
               {'connection': '10.66.90.115', 'initiatorname': '(null)',
                'iqn': 'iqn.1992-08.com.netapp:sn.135053389', 'port': '3260', 'portal': '1000'}],
  'pathstatus': [{'lun': '4', 'physdev': 'sdf', 'state': 'active', 'type': 'iSCSI'},
                 {'lun': '4', 'physdev': 'sdk', 'state': 'active', 'type': 'iSCSI'}],
  'physicalblocksize': '512',
  'productID': 'LUN',
  'pvUUID': '',
  'serial': 'SNETAPP_LUN_P3L3BK3AgufH',
  'status': 'free',
  'vendorID': 'NETAPP',
  'vgUUID': ''},
 {'GUID': '35000c5001d5b2973',
  'capacity': '146815737856',
  'devtype': 'FCP',
  'fwrev': 'HPS2',
  'logicalblocksize': '512',
  'pathlist': [],
  'pathstatus': [{'lun': '0', 'physdev': 'sda', 'state': 'active', 'type': 'FCP'}],
  'physicalblocksize': '512',
  'productID': 'ST3146356SS',
  'pvUUID': '',
  'serial': 'SSEAGATE_ST3146356SS_3QN3BEZL00009030FJ78',
  'status': 'used',
  'vendorID': 'SEAGATE',
  'vgUUID': ''}]
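The lsblk and multipath output above show the same serial on two sd nodes, one per target portal. As a minimal illustrative sketch (not vdsm's actual code; the device names and serials are copied from the `lsblk --nodeps -o name,serial` output above), the paths can be grouped by serial like this:

```python
# Illustrative only: group SCSI block devices by serial number to show
# that e.g. sdc and sdh are two paths to the same NETAPP LUN.
from collections import defaultdict

# Sample data copied from the lsblk output above (header omitted).
LSBLK_OUTPUT = """\
sda 5000c5001d5b2973
sdb 60a9800050334c33424b334163434546
sdc 60a9800050334c33424b334166784f55
sdg 60a9800050334c33424b334163434546
sdh 60a9800050334c33424b334166784f55
loop0
"""

def group_by_serial(output):
    """Map each serial to the list of device nodes that expose it."""
    paths = defaultdict(list)
    for line in output.splitlines():
        fields = line.split()
        if len(fields) == 2:  # skip devices with no serial (loop, sr)
            name, serial = fields
            paths[serial].append(name)
    return dict(paths)

groups = group_by_serial(LSBLK_OUTPUT)
# A serial seen on more than one node is a multipath LUN with several paths.
multipath_luns = {s: devs for s, devs in groups.items() if len(devs) > 1}
```

Multipath collapses each such group into a single dm device (dm-0, dm-1, ...), which is why getDeviceList reports one GUID with two entries in 'pathstatus'.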
Created attachment 1015108 [details]
iscsi.tar.gz
> Description of problem:
> 1. Boot lun should not be listed in iSCSI storage domain on rhevh 7.1
> multipath host

What is the output of "df -T /" on this machine?

> 2. Another issue is all luns will be displayed twice in the multipath
> machine.
> In the screenshot, the multipath luns have 2 paths, and they have the same
> serial id. but they still should not be displayed twice.

In the screenshot, we see two targets:

10.66.90.116
10.66.90.115

We see the same LUNs on both targets:

sdc 60a9800050334c33424b334166784f55
sdh 60a9800050334c33424b334166784f55
sdb 60a9800050334c33424b334163434546
sdg 60a9800050334c33424b334163434546
sdd 60a9800050334c33424b334167714852
sdi 60a9800050334c33424b334167714852

And the same in multipath:

> [root@hp-z600-03 admin]# multipath -ll
> 360a9800050334c33424b334166784f55 dm-0 NETAPP ,LUN
> size=19G features='3 pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
> `-+- policy='service-time 0' prio=2 status=active
>   |- 0:0:0:1 sdc 8:32  active ready running
>   `- 0:0:1:1 sdh 8:112 active ready running
> 360a9800050334c33424b334163434546 dm-1 NETAPP ,LUN
> size=25G features='3 pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
> `-+- policy='service-time 0' prio=2 status=active
>   |- 0:0:0:0 sdb 8:16 active ready running
>   `- 0:0:1:0 sdg 8:96 active ready running
> 360a9800050334c33424b334167714852 dm-2 NETAPP ,LUN
> size=1021M features='3 pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
> `-+- policy='service-time 0' prio=2 status=active
>   |- 0:0:0:2 sdd 8:48  active ready running
>   `- 0:0:1:2 sdi 8:128 active ready running

The vdsClient output is also expected:

> # vdsClient -s 0 getDeviceList
> [{'GUID': '360a9800050334c33424b334166784f55',
>   'capacity': '20403191808',
>   'devtype': 'iSCSI',
>   'fwrev': '7320',
>   'logicalblocksize': '512',
>   'pathlist': [{'connection': '10.66.90.116', 'initiatorname': '(null)',
>                 'iqn': 'iqn.1992-08.com.netapp:sn.135053389', 'port': '3260', 'portal': '1001'},
>                {'connection': '10.66.90.115', 'initiatorname': '(null)',
>                 'iqn': 'iqn.1992-08.com.netapp:sn.135053389', 'port': '3260', 'portal': '1000'}],
>   'pathstatus': [{'lun': '1', 'physdev': 'sdc', 'state': 'active', 'type': 'iSCSI'},
>                  {'lun': '1', 'physdev': 'sdh', 'state': 'active', 'type': 'iSCSI'}],
>   'physicalblocksize': '512',
>   'productID': 'LUN',
>   'pvUUID': 'wFqC21-bAze-VMQz-lch3-uxZ5-nqmP-b3ffCr',
>   'serial': 'SNETAPP_LUN_P3L3BK3AfxOU',
>   'status': 'used',
>   'vendorID': 'NETAPP',
>   'vgUUID': '3NhdnC-kkuC-qMdZ-ynRT-Xazk-df7g-hWGoEJ'},
>  {'GUID': '360a9800050334c33424b334163434546',
>   'capacity': '26684162048',
>   'devtype': 'iSCSI',
>   'fwrev': '7320',
>   'logicalblocksize': '512',
>   'pathlist': [{'connection': '10.66.90.116', 'initiatorname': '(null)',
>                 'iqn': 'iqn.1992-08.com.netapp:sn.135053389', 'port': '3260', 'portal': '1001'},
>                {'connection': '10.66.90.115', 'initiatorname': '(null)',
>                 'iqn': 'iqn.1992-08.com.netapp:sn.135053389', 'port': '3260', 'portal': '1000'}],
>   'pathstatus': [{'lun': '0', 'physdev': 'sdb', 'state': 'active', 'type': 'iSCSI'},
>                  {'lun': '0', 'physdev': 'sdg', 'state': 'active', 'type': 'iSCSI'}],
>   'physicalblocksize': '512',
>   'productID': 'LUN',
>   'pvUUID': '',
>   'serial': 'SNETAPP_LUN_P3L3BK3AcCEF',
>   'status': 'used',
>   'vendorID': 'NETAPP',
>   'vgUUID': ''},
>  {'GUID': '360a9800050334c33424b334167714852',
>   'capacity': '1070596096',
>   'devtype': 'iSCSI',
>   'fwrev': '7320',
>   'logicalblocksize': '512',
>   'pathlist': [{'connection': '10.66.90.116', 'initiatorname': '(null)',
>                 'iqn': 'iqn.1992-08.com.netapp:sn.135053389', 'port': '3260', 'portal': '1001'},
>                {'connection': '10.66.90.115', 'initiatorname': '(null)',
>                 'iqn': 'iqn.1992-08.com.netapp:sn.135053389', 'port': '3260', 'portal': '1000'}],
>   'pathstatus': [{'lun': '2', 'physdev': 'sdd', 'state': 'active', 'type': 'iSCSI'},
>                  {'lun': '2', 'physdev': 'sdi', 'state': 'active', 'type': 'iSCSI'}],
>   'physicalblocksize': '512',
>   'productID': 'LUN',
>   'pvUUID': '',
>   'serial': 'SNETAPP_LUN_P3L3BK3AgqHR',
>   'status': 'free',
>   'vendorID': 'NETAPP',
>   'vgUUID': ''},

shaochen, can you explain why this is a bug? This is how RHEV/oVirt displays devices. Why is this wrong? How does it affect the usage of the system?
> > What is the output of "df -T /" on this machine?

# df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/mapper/live-rw ext2 1548144 610224 922200 40% /
devtmpfs devtmpfs 12186028 0 12186028 0% /dev
tmpfs tmpfs 12205492 0 12205492 0% /dev/shm
tmpfs tmpfs 12205492 25496 12179996 1% /run
tmpfs tmpfs 12205492 0 12205492 0% /sys/fs/cgroup
/dev/mapper/360a9800050334c33424b334163434546p3 ext2 4133360 228204 3695188 6% /run/initramfs/live
none tmpfs 12205492 61556 12143936 1% /var/lib/stateless/writable
none tmpfs 12205492 61556 12143936 1% /var/cache/man
none tmpfs 1998672 24236 1853196 2% /var/log
none tmpfs 12205492 61556 12143936 1% /var/lib/dbus
none tmpfs 12205492 61556 12143936 1% /tmp
none tmpfs 12205492 61556 12143936 1% /var/lib/dhclient
none tmpfs 12205492 61556 12143936 1% /var/tmp
none tmpfs 12205492 61556 12143936 1% /var/lib/iscsi
none tmpfs 12205492 61556 12143936 1% /var/lib/logrotate.status
none tmpfs 12205492 61556 12143936 1% /var/lib/ntp
/dev/mapper/HostVG-Config ext4 5021312 22060 4721136 1% /config
none tmpfs 12205492 61556 12143936 1% /var/spool
none tmpfs 12205492 61556 12143936 1% /var/lib/nfs
none tmpfs 12205492 61556 12143936 1% /etc
none tmpfs 12205492 61556 12143936 1% /var/lib/net-snmp
none tmpfs 12205492 61556 12143936 1% /var/lib/dnsmasq
none tmpfs 12205492 61556 12143936 1% /root/.ssh
none tmpfs 12205492 61556 12143936 1% /root/.uml
none tmpfs 12205492 61556 12143936 1% /var/cache/libvirt
none tmpfs 12205492 61556 12143936 1% /var/lib/libvirt
none tmpfs 12205492 61556 12143936 1% /var/cache/multipathd
none tmpfs 12205492 61556 12143936 1% /mnt
none tmpfs 12205492 61556 12143936 1% /boot
none tmpfs 12205492 61556 12143936 1% /boot-kdump
none tmpfs 12205492 61556 12143936 1% /var/lib/yum
none tmpfs 12205492 61556 12143936 1% /var/cache/yum
none tmpfs 12205492 61556 12143936 1% /usr/share/snmp/mibs
none tmpfs 12205492 61556 12143936 1% /var/lib/lldpad
none tmpfs 12205492 61556 12143936 1% /usr/share/snmp/mibs
none tmpfs 12205492 61556 12143936 1% /var/lib/stateless/writable/usr/share/snmp/mibs
none tmpfs 12205492 61556 12143936 1% /var/lib/lldpad
none tmpfs 12205492 61556 12143936 1% /var/lib/stateless/writable/var/lib/lldpad
none tmpfs 12205492 61556 12143936 1% /var/cache/rhn
none tmpfs 12205492 61556 12143936 1% /var/db
none tmpfs 12205492 61556 12143936 1% /usr/libexec/vdsm/hooks
none tmpfs 12205492 61556 12143936 1% /var/lib/vdsm
none tmpfs 12205492 61556 12143936 1% /rhev/data-center
none tmpfs 12205492 61556 12143936 1% /var/lib/dhclient
none tmpfs 12205492 61556 12143936 1% /var/lib/stateless/writable/var/lib/dhclient
none tmpfs 12205492 61556 12143936 1% /tmp/early-logs
/dev/mapper/HostVG-Logging ext4 1998672 24236 1853196 2% /var/log
/dev/mapper/HostVG-Data ext4 7351784 33412 6921880 1% /data
tmpfs tmpfs 12205492 25496 12179996 1% /dev/.initramfs
/dev/mapper/29cff6c8--2135--438b--aa76--86be1acba1a6-master ext3 999320 1328 945564 1% /rhev/data-center/mnt/blockSD/29cff6c8-2135-438b-aa76-86be1acba1a6/master

> shaochen , can you explain why this is a bug? This is how RHEV/oVirt display
> devices.
> Why is this wrong? How does it effect the usage of the system?

I think the boot LUN should be filtered out; otherwise, if a user attaches the boot LUN as iSCSI storage, all the data on the boot LUN will be wiped. This is not recommended. Thanks!
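The filtering requested here could work roughly as in the sketch below: derive the boot LUN's GUID from the root filesystem's device-mapper path (as seen in the df -T output) and drop it from the reported device list. This is illustrative only; find_boot_guid and filter_boot_lun are hypothetical helper names, not vdsm's real API.

```python
# Hypothetical sketch of boot-LUN filtering; not vdsm's actual code.
import re

def find_boot_guid(root_source):
    """Extract the multipath GUID from a device-mapper path such as
    /dev/mapper/360a9800050334c33424b334163434546p3 (strip the pN suffix)."""
    m = re.match(r"/dev/mapper/(3[0-9a-f]+?)(?:p\d+)?$", root_source)
    return m.group(1) if m else None

def filter_boot_lun(devices, root_source):
    """Return the device list without the LUN hosting the root filesystem."""
    boot_guid = find_boot_guid(root_source)
    return [d for d in devices if d["GUID"] != boot_guid]

# GUIDs copied from the getDeviceList output above.
devices = [
    {"GUID": "360a9800050334c33424b334163434546"},  # boot LUN (hosts /)
    {"GUID": "360a9800050334c33424b334166784f55"},  # data LUN
]
safe = filter_boot_lun(devices, "/dev/mapper/360a9800050334c33424b334163434546p3")
```

With this kind of check in place, getDeviceList would only return LUNs that are safe to offer as storage-domain candidates.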
Moving this to vdsm for now, as vdsm is reporting the disks to Engine.
Possibly a dup or subset of bug 1212090.
(In reply to Allon Mureinik from comment #5)
> Possible a dup or subset of bug 1212090.

It is a duplicate of bug 1212090.
*** This bug has been marked as a duplicate of bug 1212090 ***