Bug 983599 - Change lvm filter to access RHEV PVs only by full path /dev/mapper/wwid
Status: CLOSED ERRATA
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.2.0
Hardware: Unspecified
OS: Unspecified
Priority: high, Severity: high
Target Milestone: ---
Target Release: 3.2.2
Assigned To: Yeela Kaplan
QA Contact: Aharon Canan
Whiteboard: storage
Keywords: ZStream
Depends On: 981055
Blocks:
Reported: 2013-07-11 10:20 EDT by Idith Tal-Kohen
Modified: 2016-02-10 13:16 EST (History)
16 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
The LVM filter has been updated to access physical volumes only by their full /dev/mapper/wwid paths, which improves performance. Previously, LVM scanned all devices, including logical volumes residing on RHEV physical volumes.
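For illustration, the filter passed to LVM takes this general shape (the WWID below is one taken from the log excerpt later in this report, used here as a placeholder):

```
filter = [ 'a|/dev/mapper/1acanan-011370441|', 'r|.*|' ]
```

The 'a|...|' entry accepts only the listed /dev/mapper paths, and the trailing 'r|.*|' rejects every other device, so LVM never scans guest logical volumes sitting on top of the physical volumes.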
Story Points: ---
Clone Of: 981055
Environment:
Last Closed: 2013-08-13 12:17:47 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments


External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 16730 None None None Never

Comment 1 Ayal Baron 2013-07-17 04:59:40 EDT
Yeela, please backport the fix to 3.2.2
Comment 3 Aharon Canan 2013-07-29 11:21:09 EDT

Reproduced using sf19.

Steps to reproduce:
====================
1. Create a setup with 2 hosts and 2 storage domains (on different storage).
2. Block one of the hosts from accessing one of the domains.
3. Check that the filter doesn't show the dead path (in vdsm.log, look for "filter").

from logs - 
===========
[root@camel-vdsc mapper]# tail -f /var/log/vdsm/vdsm.log |grep filter
storageRefresh::DEBUG::2013-07-29 18:14:56,691::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/1acanan-011370441|/dev/mapper/360060160128230008cdf67d390f7e211|/dev/mapper/360060160128230008edf67d390f7e211|/dev/mapper/3600601601282300090df67d390f7e211|/dev/mapper/3600601601282300092df67d390f7e211|/dev/mapper/3600601601282300094df67d390f7e211|\', \'r|.*|\' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
storageRefresh::DEBUG::2013-07-29 18:15:12,604::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/1acanan-011370441|/dev/mapper/360060160128230008cdf67d390f7e211|/dev/mapper/360060160128230008edf67d390f7e211|/dev/mapper/3600601601282300090df67d390f7e211|/dev/mapper/3600601601282300092df67d390f7e211|/dev/mapper/3600601601282300094df67d390f7e211|\', \'r|.*|\' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None)
storageRefresh::DEBUG::2013-07-29 18:15:12,797::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/1acanan-011370441|/dev/mapper/360060160128230008cdf67d390f7e211|/dev/mapper/360060160128230008edf67d390f7e211|/dev/mapper/3600601601282300090df67d390f7e211|/dev/mapper/3600601601282300092df67d390f7e211|/dev/mapper/3600601601282300094df67d390f7e211|\', \'r|.*|\' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None)
Thread-19::DEBUG::2013-07-29 18:15:17,971::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a|/dev/mapper/1acanan-011370441|/dev/mapper/360060160128230008cdf67d390f7e211|/dev/mapper/360060160128230008edf67d390f7e211|/dev/mapper/3600601601282300090df67d390f7e211|/dev/mapper/3600601601282300092df67d390f7e211|/dev/mapper/3600601601282300094df67d390f7e211|\', \'r|.*|\' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free d68f2c43-3c1b-43e1-8114-420867e05d5f' (cwd None)



LUN status -
=============
[root@camel-vdsc mapper]# multipath -ll
3600601601282300090df67d390f7e211 dm-11 DGC,VRAID
size=100G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
`-+- policy='round-robin 0' prio=0 status=enabled
  `- 9:0:0:2 sde 8:64 failed faulty running
3600601601282300094df67d390f7e211 dm-4 DGC,VRAID
size=100G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
`-+- policy='round-robin 0' prio=0 status=enabled
  `- 9:0:0:4 sdg 8:96 failed faulty running
360060160128230008edf67d390f7e211 dm-10 DGC,VRAID
size=100G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
`-+- policy='round-robin 0' prio=0 status=enabled
  `- 9:0:0:1 sdd 8:48 failed faulty running
3600601601282300092df67d390f7e211 dm-13 DGC,VRAID
size=100G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
`-+- policy='round-robin 0' prio=0 status=enabled
  `- 9:0:0:3 sdf 8:80 failed faulty running
360060160128230008cdf67d390f7e211 dm-12 DGC,VRAID
size=100G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
`-+- policy='round-robin 0' prio=0 status=enabled
  `- 9:0:0:0 sdc 8:32 failed faulty running
1acanan-011370441 dm-2 IET,VIRTUAL-DISK
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 7:0:0:1 sdb 8:16 active ready  running
Comment 4 Ayal Baron 2013-07-30 07:58:22 EDT
(In reply to Aharon Canan from comment #3)
> 
> reproduce using sf19
> 
> steps to reproduce - 
> ====================
> 1. create setup with 2 hosts and 2 storagedomain (different storage)
> 2. block one of the hosts from accessing one of the domains.
> 3. check that filter doesn't show the dead path (in vdsm.log look for
> "filter")

This bug is not about filtering out dead paths, only about changing the filter to always point to /dev/mapper paths (as the title states).

The log excerpt above shows that this is working well.
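As a minimal sketch of how such a filter string could be assembled from the host's multipath WWIDs (this is illustrative only; `build_lvm_filter` is a hypothetical helper, not vdsm's actual code), matching the format visible in the log excerpt:

```python
def build_lvm_filter(wwids):
    """Build an LVM filter string that accepts only the given devices
    by their full /dev/mapper/<wwid> path and rejects all others."""
    # One accept entry listing each full /dev/mapper path, then a
    # reject-everything-else entry, as seen in the vdsm.log excerpt.
    accept = "a|" + "|".join("/dev/mapper/%s" % w for w in wwids) + "|"
    return "filter = [ '%s', 'r|.*|' ]" % accept

print(build_lvm_filter(["1acanan-011370441",
                        "360060160128230008cdf67d390f7e211"]))
```

Because the final 'r|.*|' rejects anything not explicitly accepted, a dead path that is still part of the multipath map remains in the filter; only devices absent from the WWID list are excluded.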
Comment 6 errata-xmlrpc 2013-08-13 12:17:47 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-1155.html
