Bug 2104515 - lvm system.devices ignoring VDO volumes during upgrade to RHVH 4.5.0
Summary: lvm system.devices ignoring VDO volumes during upgrade to RHVH 4.5.0
Keywords:
Status: CLOSED DUPLICATE of bug 2095588
Alias: None
Product: vdsm
Classification: oVirt
Component: General
Version: ---
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: bugs@ovirt.org
QA Contact: Lukas Svaty
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-07-06 14:05 UTC by Federico Sun
Modified: 2022-07-07 08:01 UTC
CC List: 12 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2022-07-07 07:59:45 UTC
oVirt Team: Storage
Embargoed:




Links:
Red Hat Issue Tracker RHV-47002 (Last Updated: 2022-07-06 14:26:09 UTC)

Description Federico Sun 2022-07-06 14:05:11 UTC
Description of problem:

When upgrading from rhvh-4.4.10.3 to rhvh-4.5.0.5, the new layer switches to using the system.devices file as the LVM filter.

During the upgrade, the 'system.devices' file generated by `vdsm-tool config-lvm-filter -y` ignores a gluster VG that sits on a VDO volume.
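
The resulting filter can be inspected after the tool runs; a minimal sketch, assuming only the standard LVM devices-file commands and path (not taken from this report):

~~~
# List the devices currently recorded in the LVM devices file
$ lvmdevices

# Or read the generated file directly
$ cat /etc/lvm/devices/system.devices
~~~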

imgbased.log during upgrade:
~~~
2022-07-06 13:26:40,690 [DEBUG] (MainThread) Executing: ['nsenter', '--root=/tmp/mnt.QvpGq//', '--wd=/tmp/mnt.QvpGq//', 'vdsm-tool', 'config-lvm-filter', '-y']
2022-07-06 13:26:41,708 [DEBUG] (MainThread) Result: b'Analyzing host...\nFound these mounted logical volumes on this host:\n\n  logical volume:  /dev/mapper/rhvh-rhvh--4.5.0.5--0.20220529.0+1\n  mountpoint:      /\n  devices:         /dev/vda2\n\n  logical volume:  /dev/mapper/rhvh-swap\n  mountpoint:      [SWAP]\n  devices:         /dev/vda2\n\nConfiguring LVM system.devices.\nDevices for following VGs will be imported:\n    \n rhvh\n\nConfiguration completed successfully!\n\nPlease reboot to verify the configuration.\n    \n'
~~~

After booting into the new 4.5.0 layer, gluster services can't be started because their VG (on a VDO volume) is filtered out by the /etc/lvm/devices/system.devices file. This is impacting RHHI host upgrades.

A quick workaround is to run `vgimportdevices`, after which the gluster VG is added correctly to system.devices. This step seems to be skipped during the upgrade.
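
A minimal sketch of that workaround, using the VG name from the reproducer below (`--all` imports every visible VG instead):

~~~
# Add the missing VG (and the VDO device under it) to system.devices
$ vgimportdevices vdo_vg

# Or import all accessible VGs at once
$ vgimportdevices --all
~~~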


How reproducible:
100%

Steps to Reproduce:
1. install rhvh-4.4.10.3
2. add a new disk to rhvh-4.4.10.3 and set up VDO:
     
  $ vdo create --name=vdo1 --device=/dev/vdb 
  $ pvcreate /dev/mapper/vdo1
  $ vgcreate vdo_vg /dev/mapper/vdo1
  $ lvcreate -L1G -n vdo_lv vdo_vg    

vdb                                         252:16   0    6G  0 disk
`-vdo1                                      253:13   0    2G  0 vdo
  `-vdo_vg-vdo_lv                           253:14   0    1G  0 lvm

3. format it and add it to /etc/fstab (a sample entry is sketched after these steps):

  $ mkfs.xfs /dev/mapper/vdo_vg-vdo_lv
  

4. Install rhvh-4.5.0.5 and reboot 
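
The fstab entry for step 3 is not shown above; a plausible sketch, where the mount point and the systemd mount options are illustrative assumptions rather than values from this report:

~~~
# Hypothetical /etc/fstab line for the VDO-backed LV (mount point chosen for illustration)
/dev/mapper/vdo_vg-vdo_lv  /mnt/vdo_lv  xfs  defaults,x-systemd.requires=vdo.service,x-systemd.device-timeout=0  0 0
~~~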


Actual results:

Booting up 4.5.0, 'vdo_vg' is not activated because the generated system.devices file contains only 'rhvh'.

Expected results:

During the upgrade, any VG on top of a VDO volume should be added to 'system.devices' as well.

Comment 1 Sandro Bonazzola 2022-07-07 06:44:40 UTC
Moved to the storage team on vdsm for investigation. The change doesn't seem to be related only to RHV-H.

Comment 2 Arik 2022-07-07 07:23:13 UTC
Albert, duplicate of bz 2095588?

Comment 3 Albert Esteve 2022-07-07 07:52:07 UTC
Yes, it matches exactly what is observed in bz 2095588.

Comment 4 Arik 2022-07-07 07:59:45 UTC

*** This bug has been marked as a duplicate of bug 2095588 ***

