Bug 1130527 - Prevent guest vs. hypervisor VG/LV naming collisions (lvm filter?)
Summary: Prevent guest vs. hypervisor VG/LV naming collisions (lvm filter?)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.6.0
Hardware: All
OS: Linux
Priority: medium
Severity: high
Target Milestone: ovirt-4.2.1
Assignee: Nir Soffer
QA Contact: Kevin Alon Goldblatt
URL:
Whiteboard:
Depends On: 1374545
Blocks: 1450114
 
Reported: 2014-08-15 13:21 UTC by akotov
Modified: 2021-06-10 10:48 UTC
CC List: 20 users

Fixed In Version: vdsm-4.20.9
Doc Type: Bug Fix
Doc Text:
LVM scans and activates raw volumes during boot. Then it scans and activates guest logical volumes created inside a guest on top of the raw volumes. It also scans and activates guest logical volumes inside LUNs which are not part of a Red Hat Virtualization storage domain. As a result, it may find logical volumes with the same volume name or volume group name as groups or volumes on the host, causing errors. To avoid this, you can configure an LVM filter using the "vdsm-tool config-lvm-filter" command. The LVM filter prevents scanning and activation of logical volumes not required by the host, thereby avoiding naming collisions.
Clone Of:
Environment:
Last Closed: 2018-05-15 17:49:33 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:
sherold: Triaged+




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2018:1489 0 None None None 2018-05-15 17:51:04 UTC
oVirt gerrit 85126 0 None MERGED lvmfilter: Filter out the master LV on the SPM 2020-10-04 14:43:45 UTC

Description akotov 2014-08-15 13:21:27 UTC
Description of problem:

The RFE is to make RHEV better handle the situation where a guest, to which a direct LUN was presented, can create a volume group with the same name as the internal volume group of the RHEV Hypervisor (HostVG). This can lead to the environment being non-operational after a hypervisor reboot.

The proposition is to filter out in lvm.conf only the LUNs that are directly presented to guests, at the time such a LUN is presented.
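
For illustration only, a per-LUN reject rule in /etc/lvm/lvm.conf could look like the line below; xxxyyy is a placeholder for the direct LUN's multipath name, not a value from a real setup:

   filter = [ "r|^/dev/mapper/xxxyyy$|", "a|.*|" ]

This rejects the one device presented to the guest and accepts everything else.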

Comment 9 Yaniv Lavi 2016-12-14 16:19:44 UTC
This bug has the requires_doc_text flag set, yet no documentation text was provided. Please add the documentation text and only then set this flag.

Comment 12 Nir Soffer 2017-02-27 15:07:19 UTC
This requires applying an lvm filter whitelisting the host devices, so LVM
does not scan direct LUNs (or any other LUN which is not needed by the hypervisor).
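
For illustration, such a whitelist filter in /etc/lvm/lvm.conf could look like this, assuming the host itself only needs its local root PV on /dev/sda2 (the device path is an example, not taken from this bug):

   filter = [ "a|^/dev/sda2$|", "r|.*|" ]

Only the listed host devices are scanned; everything else, including direct LUNs, is rejected.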

Comment 13 Nir Soffer 2017-05-12 15:23:45 UTC
(In reply to akotov from comment #0)
> The RFE is to make RHEV better handle the situation where a guest, to which a
> direct LUN was presented, can create a volume group with the same name as the
> internal volume group of the RHEV Hypervisor (HostVG). This can lead to the
> environment being non-operational after a hypervisor reboot.

Can you add more specific details on how to reproduce this issue, and how a vg/lv
with the same name can be created by the guest?

RHV vgs and lvs use uuids as names, so I don't think we can have an identical vg/lv
name on both the host and a guest.

An issue suggested in another bug was identical vg/lv uuid (internal lvm uuid),
which is possible when you clone a raw RHV disk after creating vg/lvs inside
the guest.

We would like to get specific instructions so we can reproduce this issue.

Comment 14 Nir Soffer 2017-07-02 16:42:21 UTC
I tried to reproduce this issue by simulating discovery of a new LUN that is used
on another system as an LVM physical volume.

Tested on:
# rpm -qa | egrep 'lvm2|multipath|kernel' | sort
device-mapper-multipath-0.4.9-99.el7_3.1.x86_64
device-mapper-multipath-libs-0.4.9-99.el7_3.1.x86_64
kernel-3.10.0-327.el7.x86_64
kernel-3.10.0-514.10.2.el7.x86_64
kernel-3.10.0-514.el7.x86_64
kernel-headers-3.10.0-514.10.2.el7.x86_64
kernel-tools-3.10.0-514.10.2.el7.x86_64
kernel-tools-libs-3.10.0-514.10.2.el7.x86_64
lvm2-2.02.166-1.el7_3.3.x86_64
lvm2-libs-2.02.166-1.el7_3.3.x86_64

I tried this flow:

1. select a FC LUN not used by vdsm
2. create a PV, VG and 2 LVs
   pvcreate /dev/mapper/xxxyyy
   vgcreate guest-vg /dev/mapper/xxxyyy
   lvcreate --name guest-lv-1 --size 10g guest-vg
   lvcreate --name guest-lv-2 --size 10g guest-vg
3. Stop vdsm and multipathd
   (hoping that multipathd will not update /etc/multipath/wwids when stopped)
4. remove xxxyyy from /etc/multipath/wwids
5. reboot
6. check using multipath -ll if multipath could grab the LUN after boot
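
A hedged sketch of the commands for steps 3-6, assuming the standard service names and that the LUN's WWID appears as a single line in /etc/multipath/wwids (xxxyyy is the placeholder from step 2):

   # systemctl stop vdsmd multipathd
   # sed -i '/xxxyyy/d' /etc/multipath/wwids
   # reboot
   ...
   # multipath -ll | grep xxxyyy    (check if multipath grabbed the LUN after boot)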

I could not reproduce it after 4 tries.

In the output of journalctl -b, we can see that both lvm and multipath are trying to grab
the devices, but in all cases multipath could grab the device.

Jul 02 19:30:07 grey-vdsc.eng.lab.tlv.redhat.com multipathd[911]: sde: add path (uevent)
Jul 02 19:30:07 grey-vdsc.eng.lab.tlv.redhat.com multipathd[911]: sde: spurious uevent, path already in pathvec
Jul 02 19:30:07 grey-vdsc.eng.lab.tlv.redhat.com systemd[1]: Created slice system-lvm2\x2dpvscan.slice.
Jul 02 19:30:07 grey-vdsc.eng.lab.tlv.redhat.com systemd[1]: Starting system-lvm2\x2dpvscan.slice.
Jul 02 19:30:07 grey-vdsc.eng.lab.tlv.redhat.com systemd[1]: Starting LVM2 PV scan on device 8:64...
Jul 02 19:30:07 grey-vdsc.eng.lab.tlv.redhat.com systemd[1]: Starting LVM2 PV scan on device 8:80...
Jul 02 19:30:07 grey-vdsc.eng.lab.tlv.redhat.com kernel: device-mapper: multipath service-time: version 0.3.0 loaded
Jul 02 19:30:07 grey-vdsc.eng.lab.tlv.redhat.com multipathd[911]: 3600a09803830355a332b47677750717a: load table [0 104857600 multipath 3 pg_init_retries 50 retain_attached_hw_handler 0 1 1 s
Jul 02 19:30:07 grey-vdsc.eng.lab.tlv.redhat.com multipathd[911]: 3600a09803830355a332b47677750717a: event checker started
Jul 02 19:30:07 grey-vdsc.eng.lab.tlv.redhat.com multipathd[911]: sde [8:64]: path added to devmap 3600a09803830355a332b47677750717a
Jul 02 19:30:07 grey-vdsc.eng.lab.tlv.redhat.com multipathd[911]: sdf: add path (uevent)
Jul 02 19:30:07 grey-vdsc.eng.lab.tlv.redhat.com multipathd[911]: sdf: spurious uevent, path already in pathvec
Jul 02 19:30:07 grey-vdsc.eng.lab.tlv.redhat.com multipathd[911]: 3600a09803830355a332b476777507230: load table [0 104857600 multipath 3 pg_init_retries 50 retain_attached_hw_handler 0 1 1 s
Jul 02 19:30:07 grey-vdsc.eng.lab.tlv.redhat.com multipathd[911]: 3600a09803830355a332b476777507230: event checker started
Jul 02 19:30:07 grey-vdsc.eng.lab.tlv.redhat.com multipathd[911]: sdf [8:80]: path added to devmap 3600a09803830355a332b476777507230
Jul 02 19:30:07 grey-vdsc.eng.lab.tlv.redhat.com systemd[1]: Started LVM2 PV scan on device 8:64.
Jul 02 19:30:07 grey-vdsc.eng.lab.tlv.redhat.com systemd[1]: Started LVM2 PV scan on device 8:80.

Maybe this issue was solved in 7.3?

Or maybe there is another thing needed to reproduce this bug?

We also need to test the case when a device is discovered not during boot,
but when scanning FC hosts. Maybe the timing is different in this case?

Ben, can you advise how to simulate this issue better?

Comment 15 Nir Soffer 2017-07-02 16:45:05 UTC
Raz, I will need help from QE for testing this. I will need to be able to map
a new LUN with an LVM setup to a running system, and trigger a FC scan.

I will need access to a FC server for mapping a new LUN, or someone from QE
who can help with this. I will need to map and unmap a LUN to a host
several times to reproduce this.

Comment 16 Raz Tamir 2017-07-02 17:03:25 UTC
Elad,
Can you please help Nir?

Comment 18 Nir Soffer 2017-07-02 17:20:06 UTC
Adding back needinfo for Ben, please see comment 14.

Comment 19 Nir Soffer 2017-07-02 21:02:48 UTC
Oops, comment 14 was pasted in the wrong bug, removing needinfos.

Comment 20 Nir Soffer 2017-07-02 21:32:26 UTC
I could reproduce creating duplicate PVs, VGs and LVs by cloning a vm with a raw disk.

I started with this setup:

- Storage domain using FC LUNs
- Vm provisioned with Fedora 25 on a raw ovirt disk
- Add a second raw disk (/dev/sdb)
- Create a pv from the second disk (/dev/sdb)
- Extend the "fedora" vg in the guest with /dev/sdb
- Shut down the vm
- Clone the vm - this copies the disk contents to a new raw disk

In the first vm we have:

# vgs -o name,pv_name,vg_uuid,pv_uuid
  VG     PV         VG UUID                                PV UUID                               
  fedora /dev/sda2  Zj1Vrb-M1y6-x23X-1Nkv-WQo3-cmg2-9ORVLh Go3BHM-7vEM-onie-2PX3-XuGp-ItvY-RByRbX
  fedora /dev/sdb   Zj1Vrb-M1y6-x23X-1Nkv-WQo3-cmg2-9ORVLh 1uNLk4-ImWR-A48E-pRIZ-5DZe-fVij-bKq8bB

In the second vm:

# vgs -o name,pv_name,vg_uuid,pv_uuid
  VG     PV         VG UUID                                PV UUID                               
  fedora /dev/sda2  Zj1Vrb-M1y6-x23X-1Nkv-WQo3-cmg2-9ORVLh Go3BHM-7vEM-onie-2PX3-XuGp-ItvY-RByRbX
  fedora /dev/sdb   Zj1Vrb-M1y6-x23X-1Nkv-WQo3-cmg2-9ORVLh 1uNLk4-ImWR-A48E-pRIZ-5DZe-fVij-bKq8bB


During boot we see:

Jul 02 20:52:55 grey-vdsc.eng.lab.tlv.redhat.com lvm[1291]: WARNING: PV 1uNLk4-ImWR-A48E-pRIZ-5DZe-fVij-bKq8bB on /dev/f6cb830a-cfee-410c-9d7f-f449926f3022/13116c78-465e-40bc-863b-4d08d66ac9
Jul 02 20:52:55 grey-vdsc.eng.lab.tlv.redhat.com lvm[1291]: WARNING: PV 1uNLk4-ImWR-A48E-pRIZ-5DZe-fVij-bKq8bB prefers device /dev/f6cb830a-cfee-410c-9d7f-f449926f3022/bf346def-5bce-402d-a90
Jul 02 20:52:56 grey-vdsc.eng.lab.tlv.redhat.com lvm[1291]: WARNING: PV 1uNLk4-ImWR-A48E-pRIZ-5DZe-fVij-bKq8bB on /dev/f6cb830a-cfee-410c-9d7f-f449926f3022/13116c78-465e-40bc-863b-4d08d66ac9
Jul 02 20:52:56 grey-vdsc.eng.lab.tlv.redhat.com lvm[1291]: WARNING: PV 1uNLk4-ImWR-A48E-pRIZ-5DZe-fVij-bKq8bB prefers device /dev/f6cb830a-cfee-410c-9d7f-f449926f3022/bf346def-5bce-402d-a90
Jul 02 20:52:56 grey-vdsc.eng.lab.tlv.redhat.com lvm[1291]: Couldn't find device with uuid Go3BHM-7vEM-onie-2PX3-XuGp-ItvY-RByRbX.
Jul 02 20:52:56 grey-vdsc.eng.lab.tlv.redhat.com lvm[1291]: Refusing activation of partial LV fedora/swap.  Use '--activationmode partial' to override.
Jul 02 20:52:56 grey-vdsc.eng.lab.tlv.redhat.com lvm[1291]: Refusing activation of partial LV fedora/root.  Use '--activationmode partial' to override.
Jul 02 20:52:56 grey-vdsc.eng.lab.tlv.redhat.com lvm[1291]: 0 logical volume(s) in volume group "fedora" now active

LVM tried and failed to activate the guest "fedora" vg, since it uses
/dev/sda2 in the guest, which is not available on the host.

And the same for the other identical vm:

Jul 02 20:53:13 grey-vdsc.eng.lab.tlv.redhat.com lvm[1979]: WARNING: PV 1uNLk4-ImWR-A48E-pRIZ-5DZe-fVij-bKq8bB on /dev/f6cb830a-cfee-410c-9d7f-f449926f3022/13116c78-465e-40bc-863b-4d08d66ac9
Jul 02 20:53:13 grey-vdsc.eng.lab.tlv.redhat.com lvm[1979]: WARNING: PV 1uNLk4-ImWR-A48E-pRIZ-5DZe-fVij-bKq8bB prefers device /dev/f6cb830a-cfee-410c-9d7f-f449926f3022/bf346def-5bce-402d-a90
Jul 02 20:53:13 grey-vdsc.eng.lab.tlv.redhat.com postfix/postfix-script[2036]: starting the Postfix mail system
Jul 02 20:53:13 grey-vdsc.eng.lab.tlv.redhat.com lvm[1979]: WARNING: PV 1uNLk4-ImWR-A48E-pRIZ-5DZe-fVij-bKq8bB on /dev/f6cb830a-cfee-410c-9d7f-f449926f3022/13116c78-465e-40bc-863b-4d08d66ac9
Jul 02 20:53:13 grey-vdsc.eng.lab.tlv.redhat.com lvm[1979]: WARNING: PV 1uNLk4-ImWR-A48E-pRIZ-5DZe-fVij-bKq8bB prefers device /dev/f6cb830a-cfee-410c-9d7f-f449926f3022/bf346def-5bce-402d-a90
Jul 02 20:53:13 grey-vdsc.eng.lab.tlv.redhat.com lvm[1979]: Couldn't find device with uuid Go3BHM-7vEM-onie-2PX3-XuGp-ItvY-RByRbX.
Jul 02 20:53:13 grey-vdsc.eng.lab.tlv.redhat.com lvm[1979]: Refusing activation of partial LV fedora/swap.  Use '--activationmode partial' to override.
Jul 02 20:53:13 grey-vdsc.eng.lab.tlv.redhat.com lvm[1979]: Refusing activation of partial LV fedora/root.  Use '--activationmode partial' to override.
Jul 02 20:53:13 grey-vdsc.eng.lab.tlv.redhat.com lvm[1979]: 0 logical volume(s) in volume group "fedora" now active


After starting the identical vms:

# lvs
  WARNING: PV 1uNLk4-ImWR-A48E-pRIZ-5DZe-fVij-bKq8bB on /dev/f6cb830a-cfee-410c-9d7f-f449926f3022/bf346def-5bce-402d-a905-68ecec6cc647 was already found on /dev/f6cb830a-cfee-410c-9d7f-f449926f3022/13116c78-465e-40bc-863b-4d08d66ac9ea.
  WARNING: PV 1uNLk4-ImWR-A48E-pRIZ-5DZe-fVij-bKq8bB prefers device /dev/f6cb830a-cfee-410c-9d7f-f449926f3022/13116c78-465e-40bc-863b-4d08d66ac9ea because device was seen first.
  WARNING: PV 1uNLk4-ImWR-A48E-pRIZ-5DZe-fVij-bKq8bB on /dev/f6cb830a-cfee-410c-9d7f-f449926f3022/bf346def-5bce-402d-a905-68ecec6cc647 was already found on /dev/f6cb830a-cfee-410c-9d7f-f449926f3022/13116c78-465e-40bc-863b-4d08d66ac9ea.
  WARNING: PV 1uNLk4-ImWR-A48E-pRIZ-5DZe-fVij-bKq8bB prefers device /dev/f6cb830a-cfee-410c-9d7f-f449926f3022/13116c78-465e-40bc-863b-4d08d66ac9ea because of previous preference.
  Couldn't find device with uuid Go3BHM-7vEM-onie-2PX3-XuGp-ItvY-RByRbX.
  LV                                   VG                                   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  13116c78-465e-40bc-863b-4d08d66ac9ea f6cb830a-cfee-410c-9d7f-f449926f3022 -wi-ao----   4.00g                                                    
  4a69d552-ade9-4a40-ae12-5aad690130fa f6cb830a-cfee-410c-9d7f-f449926f3022 -wi-ao----   8.00g                                                    
  bf346def-5bce-402d-a905-68ecec6cc647 f6cb830a-cfee-410c-9d7f-f449926f3022 -wi-ao----   4.00g                                                    
  ef3393df-03c0-443e-a220-14911a268ee5 f6cb830a-cfee-410c-9d7f-f449926f3022 -wi-ao----   8.00g                                                    
  root                                 fedora                               -wi-----p-   8.20g                                                    
  swap                                 fedora                               -wi-----p- 820.00m                                                    

Note the guest vg "fedora" - which is partial because /dev/sda2 is not 
available on the host.


Second setup:

- Provision a new vm with Fedora 25 on a raw disk
- Add a new raw disk (/dev/sdb)
- Create a pv from /dev/sdb
- Create a new "guest-vg" with /dev/sdb
- Create two lvs, guest-lv-1 and guest-lv-2, on guest-vg

Inside the vm:

# lvs -o vg_name,vg_uuid,lv_name,lv_uuid
  VG       VG UUID                                LV         LV UUID                               
  fedora   liFIYM-q06r-qZbk-uir4-INAQ-3cV2-DQskOI root       bbkhGt-Ryjh-iY0T-1Zqt-IvHt-6YjD-3ggIYg
  fedora   liFIYM-q06r-qZbk-uir4-INAQ-3cV2-DQskOI swap       sRGJcu-3J6m-bP6s-zD78-r7mH-sItY-b7ldde
  guest-vg UcustB-ynZx-hEPt-2nc1-SpOc-RYGX-5elvWe guest-lv-1 CnCvrx-yIVJ-au7S-mbks-WtAx-0sWy-vdlZje
  guest-vg UcustB-ynZx-hEPt-2nc1-SpOc-RYGX-5elvWe guest-lv-2 LeyTNx-hZf4-1j83-clu5-R60s-yjN2-Cyc7lD

# vgs -o vg_name,vg_uuid,pv_name,pv_uuid
  VG       VG UUID                                PV         PV UUID                               
  fedora   liFIYM-q06r-qZbk-uir4-INAQ-3cV2-DQskOI /dev/sda2  Pqrh8C-SYBe-IIzs-ko8b-BSf7-ykrS-c6bqGZ
  guest-vg UcustB-ynZx-hEPt-2nc1-SpOc-RYGX-5elvWe /dev/sdb   O594Ad-cQeS-9E8K-HMhj-z4px-wJhe-0pAAGd

And again clone the vm, creating duplicate PV, VG, and LVs.

Start both vms - activating both raw disks on the host

On the host:

# lvs -o lv_name,lv_uuid,vg_name,vg_uuid guest-vg
  WARNING: PV O594Ad-cQeS-9E8K-HMhj-z4px-wJhe-0pAAGd on /dev/f6cb830a-cfee-410c-9d7f-f449926f3022/19aa598f-3690-4301-a85b-ee31a7531eec was already found on /dev/f6cb830a-cfee-410c-9d7f-f449926f3022/291ede52-3c55-410c-b8f0-3ec4dc3127cc.
  WARNING: PV O594Ad-cQeS-9E8K-HMhj-z4px-wJhe-0pAAGd prefers device /dev/f6cb830a-cfee-410c-9d7f-f449926f3022/291ede52-3c55-410c-b8f0-3ec4dc3127cc because device was seen first.
  LV         LV UUID                                VG       VG UUID                               
  guest-lv-1 CnCvrx-yIVJ-au7S-mbks-WtAx-0sWy-vdlZje guest-vg UcustB-ynZx-hEPt-2nc1-SpOc-RYGX-5elvWe
  guest-lv-2 LeyTNx-hZf4-1j83-clu5-R60s-yjN2-Cyc7lD guest-vg UcustB-ynZx-hEPt-2nc1-SpOc-RYGX-5elvWe

Reboot the host - during boot lvm will try to activate the guest lvs.

During boot:

# journalctl -b
Jul 02 23:38:11 grey-vdsc.eng.lab.tlv.redhat.com systemd[1]: Starting LVM2 PV scan on device 253:21...
Jul 02 23:38:11 grey-vdsc.eng.lab.tlv.redhat.com systemd[1]: Starting LVM2 PV scan on device 253:19...
Jul 02 23:38:11 grey-vdsc.eng.lab.tlv.redhat.com systemd[1]: Started LVM2 PV scan on device 253:21.
Jul 02 23:38:11 grey-vdsc.eng.lab.tlv.redhat.com systemd[1]: Started LVM2 PV scan on device 253:19.
Jul 02 23:38:11 grey-vdsc.eng.lab.tlv.redhat.com lvm[1153]: 2 logical volume(s) in volume group "guest-vg-2" now active
Jul 02 23:38:12 grey-vdsc.eng.lab.tlv.redhat.com lvm[1153]: 2 logical volume(s) in volume group "guest-vg-1" now active
Jul 02 23:38:12 grey-vdsc.eng.lab.tlv.redhat.com systemd[1]: Started Activation of LVM2 logical volumes.
Jul 02 23:38:12 grey-vdsc.eng.lab.tlv.redhat.com systemd[1]: Reached target Encrypted Volumes.
Jul 02 23:38:12 grey-vdsc.eng.lab.tlv.redhat.com systemd[1]: Starting Encrypted Volumes.
Jul 02 23:38:12 grey-vdsc.eng.lab.tlv.redhat.com systemd[1]: Starting Activation of LVM2 logical volumes...
Jul 02 23:38:12 grey-vdsc.eng.lab.tlv.redhat.com multipathd[913]: 3600a09803830355a332b47677750717a: sde - tur checker reports path is up
Jul 02 23:38:12 grey-vdsc.eng.lab.tlv.redhat.com multipathd[913]: 8:64: reinstated
Jul 02 23:38:12 grey-vdsc.eng.lab.tlv.redhat.com multipathd[913]: 3600a09803830355a332b47677750717a: remaining active paths: 4
Jul 02 23:38:12 grey-vdsc.eng.lab.tlv.redhat.com lvm[1291]: WARNING: PV O594Ad-cQeS-9E8K-HMhj-z4px-wJhe-0pAAGd on /dev/f6cb830a-cfee-410c-9d7f-f449926f3022/291ede52-3c55-410c-b8f0-3ec4dc3127
Jul 02 23:38:12 grey-vdsc.eng.lab.tlv.redhat.com lvm[1291]: WARNING: PV O594Ad-cQeS-9E8K-HMhj-z4px-wJhe-0pAAGd prefers device /dev/f6cb830a-cfee-410c-9d7f-f449926f3022/19aa598f-3690-4301-a85
Jul 02 23:38:12 grey-vdsc.eng.lab.tlv.redhat.com lvm[1291]: 2 logical volume(s) in volume group "guest-vg" now active
Jul 02 23:38:12 grey-vdsc.eng.lab.tlv.redhat.com lvm[1291]: 3 logical volume(s) in volume group "vg0" now active
Jul 02 23:38:12 grey-vdsc.eng.lab.tlv.redhat.com lvm[1291]: 13 logical volume(s) in volume group "f6cb830a-cfee-410c-9d7f-f449926f3022" now active
Jul 02 23:38:12 grey-vdsc.eng.lab.tlv.redhat.com lvm[1291]: 2 logical volume(s) in volume group "guest-vg-2" now active
Jul 02 23:38:12 grey-vdsc.eng.lab.tlv.redhat.com lvm[1291]: 2 logical volume(s) in volume group "guest-vg-1" now active
Jul 02 23:38:12 grey-vdsc.eng.lab.tlv.redhat.com systemd[1]: Started Activation of LVM2 logical volumes.
Jul 02 23:38:12 grey-vdsc.eng.lab.tlv.redhat.com systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jul 02 23:38:13 grey-vdsc.eng.lab.tlv.redhat.com lvm[1310]: WARNING: PV O594Ad-cQeS-9E8K-HMhj-z4px-wJhe-0pAAGd on /dev/f6cb830a-cfee-410c-9d7f-f449926f3022/291ede52-3c55-410c-b8f0-3ec4dc3127
Jul 02 23:38:13 grey-vdsc.eng.lab.tlv.redhat.com lvm[1310]: WARNING: PV O594Ad-cQeS-9E8K-HMhj-z4px-wJhe-0pAAGd prefers device /dev/f6cb830a-cfee-410c-9d7f-f449926f3022/19aa598f-3690-4301-a85
Jul 02 23:38:13 grey-vdsc.eng.lab.tlv.redhat.com lvm[1310]: 2 logical volume(s) in volume group "guest-vg" monitored
Jul 02 23:38:13 grey-vdsc.eng.lab.tlv.redhat.com lvm[1310]: 3 logical volume(s) in volume group "vg0" monitored
Jul 02 23:38:13 grey-vdsc.eng.lab.tlv.redhat.com lvm[1310]: 13 logical volume(s) in volume group "f6cb830a-cfee-410c-9d7f-f449926f3022" monitored
Jul 02 23:38:13 grey-vdsc.eng.lab.tlv.redhat.com lvm[1310]: 2 logical volume(s) in volume group "guest-vg-2" monitored
Jul 02 23:38:13 grey-vdsc.eng.lab.tlv.redhat.com lvm[1310]: 2 logical volume(s) in volume group "guest-vg-1" monitored
Jul 02 23:38:13 grey-vdsc.eng.lab.tlv.redhat.com systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.

On the host after reboot:

# lvs
  LV                                   VG                                   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  19aa598f-3690-4301-a85b-ee31a7531eec f6cb830a-cfee-410c-9d7f-f449926f3022 -wi-ao----   4.00g                                                    
  1d6bfc1b-f036-42eb-bf16-bf57b91951f6 f6cb830a-cfee-410c-9d7f-f449926f3022 -wi-------   8.00g                                                    
  291ede52-3c55-410c-b8f0-3ec4dc3127cc f6cb830a-cfee-410c-9d7f-f449926f3022 -wi-------   4.00g                                                    
  guest-lv-1                           guest-vg                             -wi-a-----   1.00g                                                    
  guest-lv-2                           guest-vg                             -wi-a-----   1.00g                                                    

- one of the ovirt raw lv clones was activated and is open
- guest lvs are active


So it is easy to create duplicate PVs/VGs/LVs, but I'm not sure if
this is harmful beyond the confusing warnings.

The original reporter wrote that
"It can lead to the environment being non-operational after hypervisor reboot"

I don't see how this can happen.

Roman, do you have more info on this bug and on how duplicate names/uuids
are harmful beyond the warnings?

Comment 21 Nir Soffer 2017-07-02 21:36:34 UTC
Zdenek, do you know if the duplicate pv/vg/lv described in comment 20 are harmful
beyond the warnings during boot and when running lvm commands?

I think we can eliminate them by introducing an lvm filter whitelisting the devices
used by the host, and we plan to do this because of other bugs (see bug 1450114),
but first I want to understand the severity of this issue.

Comment 22 Zdenek Kabelac 2017-07-04 16:30:56 UTC
Yes - a duplicated PV is almost always harmful - though the exact impact always depends on the individual case.

However, the user should always create filters in such a way that commands do not report or warn about duplicates.

Comment 23 Roman Hodain 2017-07-26 12:02:40 UTC
I assume that this could happen if the guest system has VG01 with LV01 and at the same time the hypervisor uses a different VG01 with LV01. I guess in that case the LVs could get, let's say, swapped, and the wrong filesystem (or none at all) would get mounted on the hypervisor. So it is not that much about duplicated PVs, but rather about duplicated names of the VGs and LVs. But this is just my understanding.
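
To illustrate this scenario (all names below are hypothetical, not taken from this bug): say the hypervisor mounts a filesystem by VG/LV name via /etc/fstab:

   /dev/VG01/LV01  /data  xfs  defaults  0 0

and a guest creates a VG and LV with the same names on a direct LUN:

   vgcreate VG01 /dev/sdb
   lvcreate --name LV01 --size 10g VG01

After a hypervisor reboot, if LVM scans the direct LUN and activates the guest's VG01/LV01, the host path /dev/VG01/LV01 may resolve to the guest's LV, so the host mount can fail or mount the wrong filesystem.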

Comment 24 Nir Soffer 2017-12-06 17:57:59 UTC
This issue is prevented by applying a proper lvm filter.

We introduced a new vdsm-tool command, "config-lvm-filter", automating the lvm
filter configuration. If you use block storage you should configure the lvm filter
properly on all hosts.

See https://ovirt.org/blog/2017/12/lvm-configuration-the-easy-way/
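
For reference, a hedged sketch of applying the fix on a host; the resulting filter depends on the host's local devices, and the line shown below is illustrative only:

   # vdsm-tool config-lvm-filter

After reviewing and confirming the suggestion, /etc/lvm/lvm.conf contains a whitelist similar to:

   filter = [ "a|^/dev/sda2$|", "r|.*|" ]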

Comment 25 Sandro Bonazzola 2017-12-12 14:37:24 UTC
Nir, is this included in 4.2.0 RC2 build? If yes, can you please adjust target-milestone and bug status?

Comment 26 Nir Soffer 2017-12-12 15:45:10 UTC
All the patches are included in 4.20.9 - except:
- https://gerrit.ovirt.org/85126
- https://gerrit.ovirt.org/85127

These are bug fixes to the new code, added after 4.20.9 was built.

We can cherry-pick them into 4.20.9.2 if needed.

Comment 27 Sandro Bonazzola 2017-12-20 13:30:27 UTC
(In reply to Nir Soffer from comment #26)
> All the patches are included in 4.20.9 - except:
> - https://gerrit.ovirt.org/85126
> - https://gerrit.ovirt.org/85127
> 
> These are bug fixes to the new code, added after 4.20.9 was built.
> 
> We can cherry-pick them into 4.20.9.2 if needed.

No, just wanted to make sure target milestone is correctly set, thanks for checking.

Comment 28 RHV bug bot 2018-01-05 16:58:52 UTC
INFO: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[No relevant external trackers attached]

For more info please contact: rhv-devops

Comment 29 RHV bug bot 2018-01-12 14:39:21 UTC
INFO: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[No relevant external trackers attached]

For more info please contact: rhv-devops

Comment 30 Kevin Alon Goldblatt 2018-02-05 15:59:48 UTC
Verified with the following code:
--------------------------------------
ovirt-engine-4.2.1.3-0.1.el7.noarch
vdsm-4.20.17-32.git9b853be.el7.centos.x86_64

Verified with the following scenario:
--------------------------------------
1. Created 2 VMs, each with a direct FC LUN attached as well as a regular block disk, in addition to the OS disk
2. Started the VMs and created a pv, vg and lv on the direct lun and on the block disk, with the names guest_vg, guest_lv and guest_vg1, guest_lv1 on each guest
3. Rebooted the host >>>>> 2 sets of duplicate LVs are reported
4. Added the filter with vdsm-tool config-lvm-filter and rebooted the host again >>>>> when the host comes up the guest lvs are no longer displayed (see the sketch below)
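
A hedged sketch of the host-side check used in steps 3 and 4 (the VG/LV names follow the scenario above; the output is illustrative):

   # lvs -o vg_name,lv_name | grep guest    (after step 3 - guest VGs visible, duplicates reported)
   # vdsm-tool config-lvm-filter            (step 4 - apply the filter)
   # reboot
   # lvs -o vg_name,lv_name | grep guest    (no output - guest LVs no longer scanned or activated)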


Moving to VERIFIED!

Comment 35 errata-xmlrpc 2018-05-15 17:49:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:1489

Comment 36 Franta Kust 2019-05-16 13:08:34 UTC
BZ<2>Jira Resync

