Bug 1524500

Summary: [downstream clone - 4.1.9] Guest LVs created on raw volumes are auto activated on the hypervisor with FC storage (lvm filter?)
Product: Red Hat Enterprise Virtualization Manager Reporter: Tal Nisan <tnisan>
Component: vdsm    Assignee: Nir Soffer <nsoffer>
Status: CLOSED ERRATA QA Contact: Kevin Alon Goldblatt <kgoldbla>
Severity: high Docs Contact:
Priority: high    
Version: 4.1.0    CC: amureini, bazulay, cshao, dfediuck, dguo, gveitmic, huzhao, jcoscia, jiawu, lsurette, mjankula, nsoffer, qiyuan, ratamir, rbarry, rhev-integ, rik.theys, sbonazzo, srevivo, tnisan, trichard, weiwang, yaniwang, ycui, ykaul, ylavi, yzhao
Target Milestone: ovirt-4.1.9    Keywords: ZStream
Target Release: ---    Flags: lsvaty: testing_plan_complete-
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: Bug Fix
Doc Text:
Currently, LVM scans and activates raw volumes during boot, and then scans and activates guest logical volumes created inside a guest on top of those raw volumes. It also scans and activates guest logical volumes inside LUNs that are not part of a Red Hat Virtualization storage domain. As a result, a host may have thousands of active logical volumes that should not be active. This leads to very slow boot times and can later lead to data corruption if a logical volume that is active on the host is extended on another host. To avoid this, you can configure an LVM filter using the "vdsm-tool config-lvm-filter" command (an example is sketched below the header fields). The LVM filter prevents scanning and activation of logical volumes not required by the host, which improves boot time.
Story Points: ---
Clone Of: 1523152 Environment:
Last Closed: 2018-01-24 14:44:27 UTC Type: ---
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: Storage RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1449968, 1523152    
Bug Blocks:    
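
Example of the command recommended in the Doc Text (a minimal sketch; the devices the tool finds and the filter it proposes are host-specific, and /dev/sde3 is taken from the verification output in comment 4):
----------------------------------------
# vdsm-tool config-lvm-filter         # analyzes the host and proposes an LVM filter
# grep 'filter = ' /etc/lvm/lvm.conf  # after confirming, a line similar to this is written:
filter = [ "a|^/dev/sde3$|", "r|.*|" ]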

Comment 2 Nir Soffer 2018-01-10 14:59:19 UTC
The patches were merged on Dec 17; not sure why Gerrit did not update the bug.

Comment 4 Kevin Alon Goldblatt 2018-01-23 14:23:01 UTC
Verified with the following code:
----------------------------------------
ovirt-engine-4.1.9.1-0.1.el7.noarch
vdsm-4.19.45-1.el7ev.x86_64


Verified with the following scenario:
----------------------------------------
Steps to Reproduce:
1. Install a host and add it to the Manager.
2. Add FC storage to the data center.
3. Create a VM on a raw device with LVM on top of it (guest-side commands sketched below).
4. Shut down the VM.
5. Put the host into maintenance.
6. Reboot the host.
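
Guest-side commands for step 3 (a sketch; it assumes the VM's raw disk shows up as /dev/vdb inside the guest, matching the vg1/lvol0 volumes seen in the output below):
----------------------------------------
pvcreate /dev/vdb                  # inside the guest: turn the raw virtual disk into a PV
vgcreate vg1 /dev/vdb              # create a guest VG on top of it
lvcreate -n lvol0 -l 100%FREE vg1  # create a guest LV spanning the VG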


Output before configuring the LVM filter:
---------------------------------------
lvs -o vg_name,lv_name,devices,attr
  VG                                   LV                                   Devices                                                                           Attr      
  7ae6a9e0-f021-4e9f-85de-fc14082516ae 0643ff20-a35d-4a5c-811e-a712d47d6de8 /dev/mapper/3514f0c5a51601382(39)                                                 -wi-ao----
  7ae6a9e0-f021-4e9f-85de-fc14082516ae ids                                  /dev/mapper/3514f0c5a51601382(29)                                                 -wi-a-----
  7ae6a9e0-f021-4e9f-85de-fc14082516ae inbox                                /dev/mapper/3514f0c5a51601382(30)                                                 -wi-a-----
  7ae6a9e0-f021-4e9f-85de-fc14082516ae metadata                             /dev/mapper/3514f0c5a51601382(0)                                                  -wi-a-----
  7ae6a9e0-f021-4e9f-85de-fc14082516ae xleases                              /dev/mapper/3514f0c5a51601382(5)                                                  -wi-a-----
  VolGroup01                           root                                 /dev/sde3(0)                                                                      -wi-ao----
  vg1                                  lvol0                                /dev/7ae6a9e0-f021-4e9f-85de-fc14082516ae/0643ff20-a35d-4a5c-811e-a712d47d6de8(0) -wi-a-----
  vg2                                  lvol0                                /dev/mapper/3514f0c5a51601381(0)                                                  -wi-------
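
Filter configuration between the two captures (a sketch; configuring the filter before the reboot is assumed here, and the proposed filter contents are host-specific):
---------------------------------------
vdsm-tool config-lvm-filter   # proposes a filter and, on confirmation, writes it to /etc/lvm/lvm.conf
reboot                        # the output below was taken after this reboot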


Output after configuring the LVM filter and rebooting:
------------------------------------
VolGroup01     root        /dev/sde3(0)   -wi-ao----


Moving to VERIFIED

Comment 7 errata-xmlrpc 2018-01-24 14:44:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0136

Comment 8 Franta Kust 2019-05-16 12:54:47 UTC
BZ<2>Jira re-sync

Comment 9 Daniel Gur 2019-08-28 13:13:55 UTC
sync2jira

Comment 10 Daniel Gur 2019-08-28 13:18:09 UTC
sync2jira