Description of problem:
Currently the local disk is listed as a candidate device for an FC storage domain when using a RHEV-H 7.1 host, but it should not be listed there.

Version-Release number of selected component (if applicable):
# rpm -q ovirt-node vdsm kernel
ovirt-node-3.2.2-3.el7.noarch
vdsm-4.16.13-1.el7ev.x86_64
kernel-3.10.0-229.1.2.el7.x86_64
# cat /etc/system-release
Red Hat Enterprise Virtualization Hypervisor 7.1 (20150402.0.el7ev)

How reproducible:
100%

Steps to Reproduce:
1. Use a machine with a local disk and an FC HBA connected to Fibre Channel storage.
2. Install RHEV-H and boot it from the FC LUN (360050763008084e6e00000000000004c).
3. Add the RHEV-H host via the RHEV-M portal.
4. In the RHEV-M admin portal, navigate to 'Storage' -> 'New Domain' and set Domain Function / Storage Type to 'Data / Fibre Channel'.

Actual results:
1. The local disk is listed among the FC storage devices.

Expected results:
1. The local disk should not be listed for a Fibre Channel domain.

Additional info:

# multipath -ll
Apr 15 04:04:48 | multipath.conf +5, invalid keyword: getuid_callout
Apr 15 04:04:48 | multipath.conf +18, invalid keyword: getuid_callout
Apr 15 04:04:48 | multipath.conf +37, invalid keyword: getuid_callout
360050763008084e6e00000000000004e dm-12 IBM ,2145
size=40G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 4:0:0:1 sdc 8:32  active ready running
| `- 5:0:1:1 sdi 8:128 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 4:0:1:1 sde 8:64  active ready running
  `- 5:0:0:1 sdg 8:96  active ready running
3600508b1001c94646ba0271afaaa249e dm-1 HP ,LOGICAL VOLUME
size=559G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 3:0:0:0 sda 8:0 active ready running
360050763008084e6e00000000000004c dm-0 IBM ,2145
size=30G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 4:0:0:0 sdb 8:16  active ready running
| `- 5:0:1:0 sdh 8:112 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 4:0:1:0 sdd 8:48 active ready running
  `- 5:0:0:0 sdf 8:80 active ready running

# lsblk --nodeps -o name,serial
NAME   SERIAL
sda    600508b1001c94646ba0271afaaa249e
sdb    60050763008084e6e00000000000004c
sdc    60050763008084e6e00000000000004e
sdd    60050763008084e6e00000000000004c
sde    60050763008084e6e00000000000004e
sdf    60050763008084e6e00000000000004c
sdg    60050763008084e6e00000000000004e
sdh    60050763008084e6e00000000000004c
sdi    60050763008084e6e00000000000004e
sr0    KWUE4PD5917
loop0
loop1
loop2

# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz0 root=live:LABEL=Root ro rootfstype=auto rootflags=ro rd.live.image rd.live.check crashkernel=256M elevator=deadline quiet max_loop=256 rhgb rd.luks=0 rd.md=0 rd.dm=0 mpath.wwid=360050763008084e6e00000000000004c

# lsblk -o name,serial
NAME                                      SERIAL
sda                                       600508b1001c94646ba0271afaaa249e
└─3600508b1001c94646ba0271afaaa249e
sdb                                       60050763008084e6e00000000000004c
└─360050763008084e6e00000000000004c
  ├─360050763008084e6e00000000000004c1
  ├─360050763008084e6e00000000000004c2
  ├─360050763008084e6e00000000000004c3
  └─360050763008084e6e00000000000004c4
    ├─HostVG-Swap
    ├─HostVG-Config
    ├─HostVG-Logging
    └─HostVG-Data
sdc                                       60050763008084e6e00000000000004e
└─360050763008084e6e00000000000004e
sdd                                       60050763008084e6e00000000000004c
└─360050763008084e6e00000000000004c
  ├─360050763008084e6e00000000000004c1
  ├─360050763008084e6e00000000000004c2
  ├─360050763008084e6e00000000000004c3
  └─360050763008084e6e00000000000004c4
    ├─HostVG-Swap
    ├─HostVG-Config
    ├─HostVG-Logging
    └─HostVG-Data
sde                                       60050763008084e6e00000000000004e
└─360050763008084e6e00000000000004e
sdf                                       60050763008084e6e00000000000004c
└─360050763008084e6e00000000000004c
  ├─360050763008084e6e00000000000004c1
  ├─360050763008084e6e00000000000004c2
  ├─360050763008084e6e00000000000004c3
  └─360050763008084e6e00000000000004c4
    ├─HostVG-Swap
    ├─HostVG-Config
    ├─HostVG-Logging
    └─HostVG-Data
sdg                                       60050763008084e6e00000000000004e
└─360050763008084e6e00000000000004e
sdh                                       60050763008084e6e00000000000004c
└─360050763008084e6e00000000000004c
  ├─360050763008084e6e00000000000004c1
  ├─360050763008084e6e00000000000004c2
  ├─360050763008084e6e00000000000004c3
  └─360050763008084e6e00000000000004c4
    ├─HostVG-Swap
    ├─HostVG-Config
    ├─HostVG-Logging
    └─HostVG-Data
sdi                                       60050763008084e6e00000000000004e
└─360050763008084e6e00000000000004e
sr0                                       KWUE4PD5917
loop0
loop1
├─live-rw
└─live-base
loop2
└─live-rw

# blkid -L Root
/dev/mapper/360050763008084e6e00000000000004c3

# vdsClient -s 0 getDeviceList
[{'GUID': '360050763008084e6e00000000000004c',
  'capacity': '32212254720',
  'devtype': 'FCP',
  'fwrev': '0000',
  'logicalblocksize': '512',
  'pathlist': [],
  'pathstatus': [{'lun': '0', 'physdev': 'sdb', 'state': 'active', 'type': 'FCP'},
                 {'lun': '0', 'physdev': 'sdd', 'state': 'active', 'type': 'FCP'},
                 {'lun': '0', 'physdev': 'sdf', 'state': 'active', 'type': 'FCP'},
                 {'lun': '0', 'physdev': 'sdh', 'state': 'active', 'type': 'FCP'}],
  'physicalblocksize': '512',
  'productID': '2145',
  'pvUUID': '',
  'serial': 'SIBM_2145_00c0202139b8XX00',
  'status': 'used',
  'vendorID': 'IBM',
  'vgUUID': ''},
 {'GUID': '3600508b1001c94646ba0271afaaa249e',
  'capacity': '600093712384',
  'devtype': 'FCP',
  'fwrev': '6.00',
  'logicalblocksize': '512',
  'pathlist': [],
  'pathstatus': [{'lun': '0', 'physdev': 'sda', 'state': 'active', 'type': 'FCP'}],
  'physicalblocksize': '512',
  'productID': 'LOGICAL VOLUME',
  'pvUUID': '',
  'serial': 'SHP_LOGICAL_VOLUME_0014380327E16E0',
  'status': 'free',
  'vendorID': 'HP',
  'vgUUID': ''},
 {'GUID': '360050763008084e6e00000000000004e',
  'capacity': '42949672960',
  'devtype': 'FCP',
  'fwrev': '0000',
  'logicalblocksize': '512',
  'pathlist': [],
  'pathstatus': [{'lun': '1', 'physdev': 'sdc', 'state': 'active', 'type': 'FCP'},
                 {'lun': '1', 'physdev': 'sde', 'state': 'active', 'type': 'FCP'},
                 {'lun': '1', 'physdev': 'sdg', 'state': 'active', 'type': 'FCP'},
                 {'lun': '1', 'physdev': 'sdi', 'state': 'active', 'type': 'FCP'}],
  'physicalblocksize': '512',
  'productID': '2145',
  'pvUUID': '',
  'serial': 'SIBM_2145_00c0202139b8XX00',
  'status': 'free',
  'vendorID': 'IBM',
  'vgUUID': ''}]
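As a cross-check (these two commands and their interpretation are added here for illustration, not part of the original report), the WWID pinned by the installer on the kernel command line identifies which of the GUIDs returned by getDeviceList is the boot LUN, and blkid confirms that the Root filesystem lives on a partition of that same multipath map:

sed -n 's/.*mpath\.wwid=\([^ ]*\).*/\1/p' /proc/cmdline   # WWID pinned by the installer
blkid -L Root                                             # device carrying the Root label

On this host the first command prints 360050763008084e6e00000000000004c and the second prints /dev/mapper/360050763008084e6e00000000000004c3, so the IBM 2145 LUN reported with status 'used' is the boot device, while the HP device 3600508b1001c94646ba0271afaaa249e is the local disk that is being offered (devtype 'FCP', status 'free').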
Created attachment 1014801 [details] varlog
Created attachment 1014803 [details] FC_domain_screeshot
Created attachment 1014806 [details] engine.log
This bug needs to be fixed in 3.5.z; the earlier, the better.
This looks more like a vdsm issue, since vdsm is the component reporting the device list to the engine; moving it there.
If it's multipathed, I don't see what we can do about it (unless node's installer tags it somehow?). Nir - any insight?
(In reply to Allon Mureinik from comment #6)
> If it's multipathed, I don't see what we can do about it (unless node's
> installer tags it somehow?).
> Nir - any insight?

We can probably detect the boot disk and filter it out.

Ying, can you show the output of these commands?

findmnt /
realpath /dev/mapper/360050763008084e6e00000000000004c
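To spell out the idea (this is a reading of the comment above, not vdsm code): findmnt shows which device node backs the root mount, and realpath resolves the boot LUN's /dev/mapper link to its canonical /dev/dm-* node; if the two can be matched, possibly through the live-image layers implied by root=live:LABEL=Root, vdsm could resolve the boot device at runtime and drop it from getDeviceList. A minimal sketch of that comparison, assuming the boot WWID from this host:

BOOT_DM=$(realpath /dev/mapper/360050763008084e6e00000000000004c)  # canonical node of the boot LUN map
ROOT_DM=$(realpath "$(findmnt -n -o SOURCE /)")                    # what actually backs "/"
[ "$ROOT_DM" = "$BOOT_DM" ] && echo "root is on the boot LUN" || echo "no direct match"

On a RHEV-H live image the root source is likely the live-rw snapshot rather than the multipath map itself, which is exactly what the requested output should confirm.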
*** Bug 1212349 has been marked as a duplicate of this bug. ***
Thanks Ying, I looked at the machine you provided.

When you select the device and try to create a storage domain using it, or add it to an existing storage domain, you get a warning that this device is used - right?

If you proceed and ignore the warning, does creating the storage domain work, destroying your boot LUN?
Workaround: add the boot LUN's WWID to multipath.conf so it appears under a user-friendly name that is easy to locate in the engine UI:

multipaths {
    multipath {
        wwid  <your boot disk's WWID, from the commands above>
        alias BOOT
    }
}
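For this particular host, a filled-in version of that workaround would look as follows (illustrative only; substitute your own boot disk's WWID, and reload the multipath configuration afterwards, e.g. with 'multipath -r'):

multipaths {
    multipath {
        wwid  360050763008084e6e00000000000004c
        alias BOOT
    }
}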
(In reply to Nir Soffer from comment #10)
> Thanks Ying, I looked at the machine you provided.
>
> When you select the device and try to create a storage domain using it, or
> add it to an existing storage domain, you get a warning that this device is
> used - right?

Nir, if you mean creating a new storage domain with this local disk, I did not get any warning message; the local disk can be created as an FC storage domain and becomes active.

> If you proceed and ignore the warning, does creating the storage domain
> work, destroying your boot LUN?

There were no warning messages, so creating the local disk as an FC storage domain works. In this case the RHEV-H boot LUN is on the FC SAN (360050763008084e6e00000000000004c); after the local disk (3600508b1001c94646ba0271afaaa249e) was added as an FC storage domain, RHEV-H rebooted successfully. After RHEV-H started, the storage domain on the local disk connected and became active.
(In reply to Ying Cui from comment #12)
> > If you proceed and ignore the warning, does creating the storage domain
> > work, destroying your boot LUN?
>
> There were no warning messages, so creating the local disk as an FC storage
> domain works.

This is very strange - the disk is reported as "used" by vdsm, and the engine should warn you about selecting this disk for a storage domain. The warning should be displayed when you click "OK".

Tal, can you look into the engine side of this?
This is an automated message. oVirt 3.6.0 RC3 has been released and GA is targeted for next week, Nov 4th 2015. Please review this bug and, if it is not a blocker, please postpone it to a later release.

All bugs not postponed on the GA release will be automatically re-targeted to
- 3.6.1 if severity >= high
- 4.0 if severity < high
*** This bug has been marked as a duplicate of bug 1033891 ***