Bug 1212090 - Local disk should not be listed in Fibre Channel storage domain on rhevh 7.1 host.
Summary: Local disk should not be listed in Fibre Channel storage domain on rhevh 7.1 host
Keywords:
Status: CLOSED DUPLICATE of bug 1033891
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.5.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ovirt-4.0.0-alpha
Target Release: 4.0.0
Assignee: Fred Rolland
QA Contact: Aharon Canan
URL:
Whiteboard: storage
Duplicates: 1212349
Depends On:
Blocks:
 
Reported: 2015-04-15 14:19 UTC by Ying Cui
Modified: 2016-02-10 18:05 UTC (History)
CC List: 17 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-10-29 09:42:11 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:
frolland: needinfo-


Attachments
varlog (189.58 KB, application/x-bzip)
2015-04-15 14:22 UTC, Ying Cui
FC_domain_screeshot (37.57 KB, image/png)
2015-04-15 14:22 UTC, Ying Cui
engine.log (815.04 KB, text/plain)
2015-04-15 14:26 UTC, Ying Cui

Description Ying Cui 2015-04-15 14:19:09 UTC
Description of problem:
Currently the local disk is listed when creating an FC storage domain on a RHEV-H 7.1 host, but it should not be listed there.

Version-Release number of selected component (if applicable):
# rpm -q ovirt-node vdsm kernel
ovirt-node-3.2.2-3.el7.noarch
vdsm-4.16.13-1.el7ev.x86_64
kernel-3.10.0-229.1.2.el7.x86_64
# cat /etc/system-release
Red Hat Enterprise Virtualization Hypervisor 7.1 (20150402.0.el7ev)


How reproducible:
100%

Steps to Reproduce:
1. A machine with a local disk and an FC HBA connected to Fibre Channel storage.
2. Install RHEV-H and boot it from the FC LUN (360050763008084e6e00000000000004c).
3. Add the RHEV-H host via the RHEV-M portal.
4. In the RHEV-M admin portal, navigate to 'Storage' -> 'New Domain' and set Domain Function / Storage Type to 'Data / Fibre Channel'.

Actual results:
1. The local disk is listed among the devices offered for the FC storage domain.

Expected results:
1. The local disk should not be listed for the Fibre Channel domain.

Additional info:
# multipath -ll
Apr 15 04:04:48 | multipath.conf +5, invalid keyword: getuid_callout
Apr 15 04:04:48 | multipath.conf +18, invalid keyword: getuid_callout
Apr 15 04:04:48 | multipath.conf +37, invalid keyword: getuid_callout
360050763008084e6e00000000000004e dm-12 IBM     ,2145            
size=40G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 4:0:0:1 sdc 8:32  active ready running
| `- 5:0:1:1 sdi 8:128 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 4:0:1:1 sde 8:64  active ready running
  `- 5:0:0:1 sdg 8:96  active ready running
3600508b1001c94646ba0271afaaa249e dm-1 HP      ,LOGICAL VOLUME  
size=559G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 3:0:0:0 sda 8:0   active ready running
360050763008084e6e00000000000004c dm-0 IBM     ,2145            
size=30G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 4:0:0:0 sdb 8:16  active ready running
| `- 5:0:1:0 sdh 8:112 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 4:0:1:0 sdd 8:48  active ready running
  `- 5:0:0:0 sdf 8:80  active ready running
# lsblk --nodeps -o name,serial
NAME  SERIAL
sda   600508b1001c94646ba0271afaaa249e
sdb   60050763008084e6e00000000000004c
sdc   60050763008084e6e00000000000004e
sdd   60050763008084e6e00000000000004c
sde   60050763008084e6e00000000000004e
sdf   60050763008084e6e00000000000004c
sdg   60050763008084e6e00000000000004e
sdh   60050763008084e6e00000000000004c
sdi   60050763008084e6e00000000000004e
sr0   KWUE4PD5917
loop0 
loop1 
loop2 

# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz0 root=live:LABEL=Root ro rootfstype=auto rootflags=ro rd.live.image rd.live.check crashkernel=256M elevator=deadline quiet max_loop=256 rhgb rd.luks=0 rd.md=0 rd.dm=0 mpath.wwid=360050763008084e6e00000000000004c

# lsblk -o name,serial
NAME                                   SERIAL
sda                                    600508b1001c94646ba0271afaaa249e
└─3600508b1001c94646ba0271afaaa249e    
sdb                                    60050763008084e6e00000000000004c
└─360050763008084e6e00000000000004c    
  ├─360050763008084e6e00000000000004c1 
  ├─360050763008084e6e00000000000004c2 
  ├─360050763008084e6e00000000000004c3 
  └─360050763008084e6e00000000000004c4 
    ├─HostVG-Swap                      
    ├─HostVG-Config                    
    ├─HostVG-Logging                   
    └─HostVG-Data                      
sdc                                    60050763008084e6e00000000000004e
└─360050763008084e6e00000000000004e    
sdd                                    60050763008084e6e00000000000004c
└─360050763008084e6e00000000000004c    
  ├─360050763008084e6e00000000000004c1 
  ├─360050763008084e6e00000000000004c2 
  ├─360050763008084e6e00000000000004c3 
  └─360050763008084e6e00000000000004c4 
    ├─HostVG-Swap                      
    ├─HostVG-Config                    
    ├─HostVG-Logging                   
    └─HostVG-Data                      
sde                                    60050763008084e6e00000000000004e
└─360050763008084e6e00000000000004e    
sdf                                    60050763008084e6e00000000000004c
└─360050763008084e6e00000000000004c    
  ├─360050763008084e6e00000000000004c1 
  ├─360050763008084e6e00000000000004c2 
  ├─360050763008084e6e00000000000004c3 
  └─360050763008084e6e00000000000004c4 
    ├─HostVG-Swap                      
    ├─HostVG-Config                    
    ├─HostVG-Logging                   
    └─HostVG-Data                      
sdg                                    60050763008084e6e00000000000004e
└─360050763008084e6e00000000000004e    
sdh                                    60050763008084e6e00000000000004c
└─360050763008084e6e00000000000004c    
  ├─360050763008084e6e00000000000004c1 
  ├─360050763008084e6e00000000000004c2 
  ├─360050763008084e6e00000000000004c3 
  └─360050763008084e6e00000000000004c4 
    ├─HostVG-Swap                      
    ├─HostVG-Config                    
    ├─HostVG-Logging                   
    └─HostVG-Data                      
sdi                                    60050763008084e6e00000000000004e
└─360050763008084e6e00000000000004e    
sr0                                    KWUE4PD5917
loop0                                  
loop1                                  
├─live-rw                              
└─live-base                            
loop2                                  
└─live-rw                              

# blkid -L Root
/dev/mapper/360050763008084e6e00000000000004c3

# vdsClient -s 0 getDeviceList
[{'GUID': '360050763008084e6e00000000000004c',
  'capacity': '32212254720',
  'devtype': 'FCP',
  'fwrev': '0000',
  'logicalblocksize': '512',
  'pathlist': [],
  'pathstatus': [{'lun': '0',
                  'physdev': 'sdb',
                  'state': 'active',
                  'type': 'FCP'},
                 {'lun': '0',
                  'physdev': 'sdd',
                  'state': 'active',
                  'type': 'FCP'},
                 {'lun': '0',
                  'physdev': 'sdf',
                  'state': 'active',
                  'type': 'FCP'},
                 {'lun': '0',
                  'physdev': 'sdh',
                  'state': 'active',
                  'type': 'FCP'}],
  'physicalblocksize': '512',
  'productID': '2145',
  'pvUUID': '',
  'serial': 'SIBM_2145_00c0202139b8XX00',
  'status': 'used',
  'vendorID': 'IBM',
  'vgUUID': ''},
 {'GUID': '3600508b1001c94646ba0271afaaa249e',
  'capacity': '600093712384',
  'devtype': 'FCP',
  'fwrev': '6.00',
  'logicalblocksize': '512',
  'pathlist': [],
  'pathstatus': [{'lun': '0',
                  'physdev': 'sda',
                  'state': 'active',
                  'type': 'FCP'}],
  'physicalblocksize': '512',
  'productID': 'LOGICAL VOLUME',
  'pvUUID': '',
  'serial': 'SHP_LOGICAL_VOLUME_0014380327E16E0',
  'status': 'free',
  'vendorID': 'HP',
  'vgUUID': ''},
 {'GUID': '360050763008084e6e00000000000004e',
  'capacity': '42949672960',
  'devtype': 'FCP',
  'fwrev': '0000',
  'logicalblocksize': '512',
  'pathlist': [],
  'pathstatus': [{'lun': '1',
                  'physdev': 'sdc',
                  'state': 'active',
                  'type': 'FCP'},
                 {'lun': '1',
                  'physdev': 'sde',
                  'state': 'active',
                  'type': 'FCP'},
                 {'lun': '1',
                  'physdev': 'sdg',
                  'state': 'active',
                  'type': 'FCP'},
                 {'lun': '1',
                  'physdev': 'sdi',
                  'state': 'active',
                  'type': 'FCP'}],
  'physicalblocksize': '512',
  'productID': '2145',
  'pvUUID': '',
  'serial': 'SIBM_2145_00c0202139b8XX00',
  'status': 'free',
  'vendorID': 'IBM',
  'vgUUID': ''}]

Comment 1 Ying Cui 2015-04-15 14:22:12 UTC
Created attachment 1014801 [details]
varlog

Comment 2 Ying Cui 2015-04-15 14:22:55 UTC
Created attachment 1014803 [details]
FC_domain_screeshot

Comment 3 Ying Cui 2015-04-15 14:26:52 UTC
Created attachment 1014806 [details]
engine.log

Comment 4 Ying Cui 2015-04-15 14:29:44 UTC
We need to fix this bug in 3.5.z; the earlier the better.

Comment 5 Fabian Deutsch 2015-04-20 12:14:39 UTC
This looks more like a vdsm issue, because vdsm reports the device list to the engine; moving it there.

Comment 6 Allon Mureinik 2015-04-20 12:18:28 UTC
If it's multipathed, I don't see what we can do about it (unless node's installer tags it somehow?).
Nir - any insight?

Comment 7 Nir Soffer 2015-04-21 00:30:19 UTC
(In reply to Allon Mureinik from comment #6)
> If it's multipathed, I don't see what we can do about it (unless node's
> installer tags it somehow?).
> Nir - any insight?

We can probably detect the boot disk and filter it out.

Ying, can you show the output of these commands?

findmnt /
realpath /dev/mapper/360050763008084e6e00000000000004c
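
For illustration, a rough sketch of the filtering idea (this is not vdsm code; the function names are hypothetical). It reads the boot LUN wwid from the mpath.wwid= kernel parameter shown in /proc/cmdline above and drops the matching GUID from a getDeviceList-style result:

# Hypothetical sketch, not vdsm code.
import re

def boot_wwid_from_cmdline(path="/proc/cmdline"):
    # Return the wwid passed as mpath.wwid=..., or None if the parameter is absent.
    with open(path) as f:
        match = re.search(r"mpath\.wwid=(\S+)", f.read())
    return match.group(1) if match else None

def filter_boot_device(devices, wwid):
    # Drop the device whose GUID equals the boot LUN wwid.
    if not wwid:
        return devices
    return [d for d in devices if d.get("GUID") != wwid]

if __name__ == "__main__":
    wwid = boot_wwid_from_cmdline()
    devices = [{"GUID": "360050763008084e6e00000000000004c"},
               {"GUID": "3600508b1001c94646ba0271afaaa249e"}]
    print(filter_boot_device(devices, wwid))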

Comment 8 Allon Mureinik 2015-04-21 08:55:35 UTC
*** Bug 1212349 has been marked as a duplicate of this bug. ***

Comment 10 Nir Soffer 2015-04-30 07:10:22 UTC
Thanks Ying, I looked at the machine you provided.

When you select the device and try to create a storage domain using 
it, or add it to existing storage domain, you get a warning that this device
is used - right?

If you proceed and ignore the warning, does creating the storage domain 
work, destroying your boot lun?

Comment 11 Nir Soffer 2015-04-30 07:15:28 UTC
Workaround:

Add the boot LUN's wwid to multipath.conf so it appears under a user-friendly
name that is easy to locate in the engine UI:

multipaths {
  multipath {
    wwid   <the boot LUN wwid obtained from the commands above>
    alias    BOOT
  }
}
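
To illustrate only (this helper is hypothetical and not part of vdsm or RHEV-H), the wwid for the stanza above can be taken from the mpath.wwid= kernel parameter shown earlier in this report, for example with a small Python snippet; the multipath maps then need to be reloaded for the alias to take effect:

# Hypothetical helper: print a multipaths stanza aliasing the boot LUN.
import re

STANZA = """multipaths {
  multipath {
    wwid   %s
    alias  BOOT
  }
}"""

with open("/proc/cmdline") as f:
    match = re.search(r"mpath\.wwid=(\S+)", f.read())

if match:
    print(STANZA % match.group(1))
else:
    print("# mpath.wwid= not found on the kernel command line")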

Comment 12 Ying Cui 2015-04-30 07:41:59 UTC
(In reply to Nir Soffer from comment #10)
> Thanks Ying, I looked at the machine you provided.
> 
> When you select the device and try to create a storage domain using 
> it, or add it to existing storage domain, you get a warning that this device
> is used - right?

Nir, if you mean creating a new storage domain on this local disk: I did not get any warning message. The local disk could be used to create an FC storage domain, and the domain became active.


> 
> If you proceed and ignore the warning, does creating the storage domain 
> work, destroying your boot lun?

There were no warning messages, so creating an FC storage domain on the local disk worked.

In this case, the RHEV-H boot LUN is the FC LUN (360050763008084e6e00000000000004c). After the local disk (3600508b1001c94646ba0271afaaa249e) was used for the FC storage domain, RHEV-H rebooted successfully; after the host started, the storage domain on the local disk connected and became active.

Comment 13 Nir Soffer 2015-04-30 20:04:00 UTC
(In reply to Ying Cui from comment #12)
> > If you proceed and ignore the warning, does creating the storage domain 
> > work, destroying your boot lun?
> 
> No warning messages, so create the local disk as FC storage domain works.

This is very strange: the disk is reported as "used" by vdsm, and the engine
should warn you about selecting it for a storage domain. The warning should
be displayed when you click "OK".

Tal, can you look into the engine side of this?

Comment 14 Sandro Bonazzola 2015-10-26 12:37:01 UTC
This is an automated message. oVirt 3.6.0 RC3 has been released and GA is targeted for next week, Nov 4th 2015.
Please review this bug and, if it is not a blocker, postpone it to a later release.
All bugs not postponed by the GA release will be automatically re-targeted to:

- 3.6.1 if severity >= high
- 4.0 if severity < high

Comment 15 Yaniv Lavi 2015-10-29 09:42:11 UTC

*** This bug has been marked as a duplicate of bug 1033891 ***

