Bug 983599 - Change lvm filter to access RHEV PVs only by full path /dev/mapper/wwid
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.2.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.2.2
Assignee: Yeela Kaplan
QA Contact: Aharon Canan
URL:
Whiteboard: storage
Depends On: 981055
Blocks:
 
Reported: 2013-07-11 14:20 UTC by Idith Tal-Kohen
Modified: 2018-12-03 19:19 UTC
CC List: 16 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
The LVM filter has been updated to access physical volumes only by their full /dev/mapper/WWID paths, which improves performance. Previously, LVM scanned all devices, including logical volumes layered on top of physical volumes.
Clone Of: 981055
Environment:
Last Closed: 2013-08-13 16:17:47 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHSA-2013:1155 (SHIPPED_LIVE): Moderate: rhev 3.2.2 - vdsm security and bug fix update (last updated 2013-08-21 21:07:13 UTC)
oVirt gerrit 16730

Comment 1 Ayal Baron 2013-07-17 08:59:40 UTC
Yeela, please backport the fix to 3.2.2

Comment 3 Aharon Canan 2013-07-29 15:21:09 UTC

Reproduced using sf19.

Steps to reproduce -
====================
1. Create a setup with 2 hosts and 2 storage domains (on different storage).
2. Block one of the hosts from accessing one of the domains.
3. Check that the filter doesn't show the dead path (in vdsm.log, look for "filter").

From the logs -
===========
[root@camel-vdsc mapper]# tail -f /var/log/vdsm/vdsm.log |grep filter
storageRefresh::DEBUG::2013-07-29 18:14:56,691::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/1acanan-011370441|/dev/mapper/360060160128230008cdf67d390f7e211|/dev/mapper/360060160128230008edf67d390f7e211|/dev/mapper/3600601601282300090df67d390f7e211|/dev/mapper/3600601601282300092df67d390f7e211|/dev/mapper/3600601601282300094df67d390f7e211|\', \'r|.*|\' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
storageRefresh::DEBUG::2013-07-29 18:15:12,604::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/1acanan-011370441|/dev/mapper/360060160128230008cdf67d390f7e211|/dev/mapper/360060160128230008edf67d390f7e211|/dev/mapper/3600601601282300090df67d390f7e211|/dev/mapper/3600601601282300092df67d390f7e211|/dev/mapper/3600601601282300094df67d390f7e211|\', \'r|.*|\' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None)
storageRefresh::DEBUG::2013-07-29 18:15:12,797::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/1acanan-011370441|/dev/mapper/360060160128230008cdf67d390f7e211|/dev/mapper/360060160128230008edf67d390f7e211|/dev/mapper/3600601601282300090df67d390f7e211|/dev/mapper/3600601601282300092df67d390f7e211|/dev/mapper/3600601601282300094df67d390f7e211|\', \'r|.*|\' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None)
Thread-19::DEBUG::2013-07-29 18:15:17,971::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/1acanan-011370441|/dev/mapper/360060160128230008cdf67d390f7e211|/dev/mapper/360060160128230008edf67d390f7e211|/dev/mapper/3600601601282300090df67d390f7e211|/dev/mapper/3600601601282300092df67d390f7e211|/dev/mapper/3600601601282300094df67d390f7e211|\', \'r|.*|\' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free d68f2c43-3c1b-43e1-8114-420867e05d5f' (cwd None)
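
For clarity, the filter embedded in the commands above uses standard lvm.conf filter syntax: the 'a|...|' entry accepts the listed /dev/mapper/WWID paths, and the final 'r|.*|' rejects every other device. A minimal manual check along the same lines (hypothetical command, not taken from the logs; the WWID is copied from the filter above, and the lvm2 CLI is assumed to be available on the host) would be:

# Hypothetical manual check: query PVs with a filter that accepts only one
# /dev/mapper path and rejects everything else.
/sbin/lvm pvs \
    --config 'devices { filter = [ "a|/dev/mapper/1acanan-011370441|", "r|.*|" ] }' \
    --noheadings -o pv_name,vg_name

If the filter is honored, only PVs on the accepted path are reported, and no other block device is scanned at all.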



LUN status -
=============
[root@camel-vdsc mapper]# multipath -ll
3600601601282300090df67d390f7e211 dm-11 DGC,VRAID
size=100G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
`-+- policy='round-robin 0' prio=0 status=enabled
  `- 9:0:0:2 sde 8:64 failed faulty running
3600601601282300094df67d390f7e211 dm-4 DGC,VRAID
size=100G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
`-+- policy='round-robin 0' prio=0 status=enabled
  `- 9:0:0:4 sdg 8:96 failed faulty running
360060160128230008edf67d390f7e211 dm-10 DGC,VRAID
size=100G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
`-+- policy='round-robin 0' prio=0 status=enabled
  `- 9:0:0:1 sdd 8:48 failed faulty running
3600601601282300092df67d390f7e211 dm-13 DGC,VRAID
size=100G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
`-+- policy='round-robin 0' prio=0 status=enabled
  `- 9:0:0:3 sdf 8:80 failed faulty running
360060160128230008cdf67d390f7e211 dm-12 DGC,VRAID
size=100G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
`-+- policy='round-robin 0' prio=0 status=enabled
  `- 9:0:0:0 sdc 8:32 failed faulty running
1acanan-011370441 dm-2 IET,VIRTUAL-DISK
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 7:0:0:1 sdb 8:16 active ready  running

Comment 4 Ayal Baron 2013-07-30 11:58:22 UTC
(In reply to Aharon Canan from comment #3)
> 
> Reproduced using sf19.
>
> Steps to reproduce -
> ====================
> 1. Create a setup with 2 hosts and 2 storage domains (on different storage).
> 2. Block one of the hosts from accessing one of the domains.
> 3. Check that the filter doesn't show the dead path (in vdsm.log, look for
> "filter").

This bug is not about filtering out dead paths; it is only about changing the filter so that it always points to /dev/mapper paths (as the title states).

The log excerpt above shows that this is working well.
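
As a rough illustration of the change (a hypothetical shell sketch only; vdsm builds this string internally in Python, and this is not its actual code), a filter that points only at full /dev/mapper paths could be assembled from the host's multipath maps like this:

# Hypothetical sketch: build an lvm.conf-style filter string from the host's
# multipath maps (dmsetup and awk assumed available on the host).
FILTER='filter = [ '
for wwid in $(dmsetup ls --target multipath | awk '{print $1}'); do
    FILTER="${FILTER}'a|/dev/mapper/${wwid}|', "    # accept each map by full path
done
FILTER="${FILTER}'r|.*|' ]"                         # reject everything else
echo "$FILTER"

The result has the same shape as the filter seen in the logged pvs/vgs/lvs commands: accepted /dev/mapper paths followed by a blanket reject (in the logs, vdsm joins all the paths into a single accept entry).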

Comment 6 errata-xmlrpc 2013-08-13 16:17:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-1155.html

