Bug 2012830 - [RFE] Use LVM devices file instead of LVM filter to manage storage devices
Summary: [RFE] Use LVM devices file instead of LVM filter to manage storage devices
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: vdsm
Classification: oVirt
Component: General
Version: 4.50
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.5.0
Target Release: 4.50.0.10
Assignee: Vojtech Juranek
QA Contact: Shir Fishbain
URL:
Whiteboard:
Depends On:
Blocks: 2016173
 
Reported: 2021-10-11 11:52 UTC by Nir Soffer
Modified: 2022-09-07 10:33 UTC (History)
9 users

Fixed In Version: vdsm-4.50.0.10
Doc Type: Enhancement
Doc Text:
You can now use the Logical Volume Management (LVM) devices file for managing storage devices instead of LVM filter, which can be complicated to set up and difficult to manage. Starting with RHEL 8.6, this will be the default for storage device management.
Clone Of:
Environment:
Last Closed: 2022-05-23 06:21:25 UTC
oVirt Team: Storage
Embargoed:
sbonazzo: ovirt-4.5?
pm-rhel: planning_ack?
pm-rhel: devel_ack+
pm-rhel: testing_ack+




Links
System ID Private Priority Status Summary Last Updated
Github oVirt vdsm pull 27 0 None open lvm devices file configuration 2022-01-06 17:03:43 UTC
Github oVirt vdsm pull 49 0 None Merged Use lvm devices in lvm commands instead of filter 2022-02-22 13:29:58 UTC
Github oVirt vdsm pull 68 0 None Merged Don't use hardcoded value for lvm devices file 2022-02-22 13:30:23 UTC
Github oVirt vdsm pull 73 0 None Merged lvm: use devices file as default config method 2022-02-22 13:30:45 UTC
Red Hat Issue Tracker RHV-43788 0 None None None 2021-10-11 12:06:53 UTC

Description Nir Soffer 2021-10-11 11:52:11 UTC
lvmdevices is the modern way to manage devices, replacing the lvm filter. It has been
optional in RHEL since 8.5 and is the default in RHEL/CentOS Stream 9.

Due to a bug in anaconda, lvmdevices is *not* enabled by default in current CentOS Stream 9
builds. This should be fixed in future builds. Once it is fixed, the lvm filter configured
using "vdsm-tool config-lvm-filter" and used by vdsm commands will be ignored, reintroducing
all the issues of running without an lvm filter.

To support CentOS Stream 9, we should update vdsm to use lvm devices instead of the lvm filter.
To simplify the code and support, we should also use lvm devices on RHEL 8.6.

This change is required to support block storage on Centos Stream 9.

## Benefits

For the admin:
- No need to edit the lvm filter, which is hard to do right.

For oVirt:
- No need to create and verify the lvm filter
- The issue of a user-modified lvm filter is gone (it currently fails vdsm-tool config-lvm-filter)
- Simpler vdsm lvm code - the same code and upgrade flows on all supported distros
  (RHEL 8, CentOS Stream 8, CentOS Stream 9)

## How lvm devices work

During installation, devices/use_devicesfile = 1 is set in lvm.conf. When this is enabled
and the system.devices file exists, lvm filter is ignored.

When an admin adds a new PV or extends a VG, the PV is added to the system.devices file
(/etc/lvm/devices/system.devices). The system probes and scans only the devices listed in
that file.

The admin does not have to create a complex and fragile lvm filter to ensure that the system
does not touch devices it should not.

oVirt does not need to create an lvm filter when adding a host, and the issue of a
user-modified lvm filter that oVirt does not know how to verify goes away.

Manual: https://man.archlinux.org/man/lvmdevices.8.en

## Flows

### Add host

- Run vdsm-tool configure to configure lvm and multipath
- There is no need to run vdsm-tool config-lvm-filter

### vdsm-tool is-configured

- Fail if devices/use_devicesfile is not enabled
- Fail if the system.devices file is not configured
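A minimal sketch of these two checks (a hypothetical helper for illustration, not vdsm's actual implementation; the paths are the defaults named in this report):

```python
import os
import re

LVM_CONF = "/etc/lvm/lvm.conf"
DEVICES_FILE = "/etc/lvm/devices/system.devices"

def lvm_is_configured(lvm_conf=LVM_CONF, devices_file=DEVICES_FILE):
    """Return True only if the devices file is enabled and present.

    Mirrors the two checks above: devices/use_devicesfile must be set
    to 1 in lvm.conf, and the system.devices file must exist.
    """
    try:
        with open(lvm_conf) as f:
            conf = f.read()
    except FileNotFoundError:
        return False
    # Look for an uncommented "use_devicesfile = 1" setting.
    enabled = re.search(r"^\s*use_devicesfile\s*=\s*1\b", conf, re.MULTILINE)
    return bool(enabled) and os.path.exists(devices_file)
```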

### vdsm-tool configure

- lvm
  - Enable devices/use_devicesfile = 1
  - If devices file was not enabled, create initial devices file
    (vgimportdevices vg-name)
  - If host is using lvm filter, convert the filter to devices file

- multipath
  - Update blacklist, currently part of vdsm-tool config-lvm-filter

### Running vdsm lvm commands

- Use --devices instead of filter in --config

Replace:

    --config 'devices { filter = [ "a|^/dev/mapper/x$|", "a|^/dev/mapper/y$|", "r|.*|" ] }'

with:

    --devices /dev/mapper/x,/dev/mapper/y
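The two styles can be sketched as follows (hypothetical helpers for illustration, not the actual vdsm code):

```python
def filter_config_arg(devices):
    """Old style: pass an lvm filter via --config, one accept per device
    followed by a reject-everything entry."""
    accepts = ", ".join('"a|^%s$|"' % dev for dev in devices)
    return "--config", 'devices { filter = [ %s, "r|.*|" ] }' % accepts

def devices_arg(devices):
    """New style: pass the allowed devices directly via --devices."""
    return "--devices", ",".join(devices)
```

For example, `devices_arg(["/dev/mapper/x", "/dev/mapper/y"])` yields `("--devices", "/dev/mapper/x,/dev/mapper/y")`, with no filter string to format or quote.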

- Remove code to format lvm filter

### Upgrade host using lvm filter

- Run vdsm-tool configure to convert lvm filter to devices file
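The conversion step amounts to extracting the accepted paths from the old filter. A sketch, assuming the regular vdsm filter shape of anchored accept patterns (this is an illustration, not the actual vdsm code):

```python
import re

def devices_from_filter(filter_items):
    """Extract device paths from vdsm-style lvm filter accept entries.

    Assumes each accept entry looks like "a|^/dev/mapper/x$|";
    reject entries such as "r|.*|" are skipped.
    """
    devices = []
    for item in filter_items:
        m = re.match(r"a\|\^(.+)\$\|$", item)
        if m:
            devices.append(m.group(1))
    return devices
```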

## Cleanups

- Remove some code related to lvm filter (some should remain to support upgrades)

## Limitations

LVM supports disabling the devices file on RHEL 9 and using the lvm filter, same as on
RHEL 8. We will not support this, to simplify the code and support. Configuring a host
as an oVirt host will enable the lvm devices file.

## Open issues

In the past, the configuration order of the multipath blacklist and the lvm filter was
important, in particular when booting from SAN. We need to check whether this can be
handled by running vdsm-tool configure, or whether we need a special command like
config-lvm-filter to ensure the order of the changes.

## Examples

### lvm command ignoring lvm filter when devices file is used

# lvs --config 'devices {filter = ["r|.*|"]}'
  Please remove the lvm.conf filter, it is ignored with the devices file.
  LV   VG Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root cs -wi-ao---- <16.95g
  swap cs -wi-ao----  <2.05g

### devices file for virtio disk

# cat /etc/lvm/devices/system.devices 
# LVM uses devices listed in this file.
# Created by LVM command vgimportdevices pid 1382 at Mon Oct  4 17:09:19 2021
VERSION=1.1.1
IDTYPE=devname IDNAME=/dev/vda2 DEVNAME=/dev/vda2 PVID=5qo7HUmKtUQTRLo7iWuIGFq3WSiImir3 PART=2
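Each device entry in system.devices is a line of KEY=VALUE fields. A small parser sketch (hypothetical, for illustration only; comment lines and the VERSION header would be handled by the caller):

```python
def parse_device_line(line):
    """Parse one KEY=VALUE device entry from system.devices into a dict."""
    return dict(field.split("=", 1) for field in line.split())

entry = parse_device_line(
    "IDTYPE=devname IDNAME=/dev/vda2 DEVNAME=/dev/vda2 "
    "PVID=5qo7HUmKtUQTRLo7iWuIGFq3WSiImir3 PART=2"
)
# entry["IDNAME"] is "/dev/vda2", entry["PART"] is "2"
```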

Comment 2 Vojtech Juranek 2022-02-22 14:13:13 UTC
The system behavior with this feature implemented is as follows:
* the lvm devices file is enabled by default during installation of a new host or a host update
* if there is any lvm filter, the filter is removed during the upgrade
* if the admin wants to stay with the lvm filter, or wants to switch back from the devices file to the lvm filter, they can create a custom vdsm config with config_method=filter in the [lvm] section, e.g.:

    $ cat /etc/vdsm/vdsm.conf.d/99-lvm.conf
    [lvm]
    config_method= filter

then run

    vdsm-tool config-lvm-filter -y

and reboot the host. Switching back to the devices file can be done by removing this custom config or by setting config_method=devices.
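The drop-in above is standard INI. A sketch of reading config_method from such a file (a hypothetical reader for illustration, defaulting to "devices" as described above; not vdsm's actual config code):

```python
import configparser

def lvm_config_method(path):
    """Read [lvm] config_method from a vdsm.conf drop-in file.

    Returns "devices" (the default) when the file or option is absent.
    """
    cp = configparser.ConfigParser()
    cp.read(path)  # missing files are silently ignored
    return cp.get("lvm", "config_method", fallback="devices").strip()
```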

By default, /etc/lvm/devices/system.devices is used as the devices file, but the user can change this by setting devices/devicesfile in the lvm config.

vdsm-tool config-lvm-filter (re)configures lvm if either devices/use_devicesfile is set to 0 in the lvm config, or the file specified by devices/devicesfile in the lvm config doesn't exist.

Instead of using a specific filter in lvm commands, vdsm now specifies the devices on which an lvm command should act using the --devices $device option, which the admin can observe in the vdsm logs in place of filter=["..."].

Any other behavior should be the same as with the lvm filter (i.e. the user is not supposed to configure anything manually, the vdsm defaults should just work, and all flows should work as before).

QA can test this by running all the storage tests on block-based storage.
This should be tested with a RHEL host as well as with oVirt Node.
QA should also test that both a RHEL host and an oVirt Node host are able to boot with the lvm devices file configured by vdsm, especially when booting from multipath devices (see e.g. BZ #2016173 for an example of a host having issues when booting from a multipath device).

Comment 4 Shir Fishbain 2022-05-04 09:04:30 UTC
Verified

* the lvm devices file is enabled by default during installation of a new host, or on a host update when upgrading from 4.4.10 to 4.5.0:
While the host is still on version 4.4:
[root@storage-ge8-vdsm2 yum.repos.d]# cat /etc/lvm/lvm.conf | grep '^filter ='
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-bfzQnM-XP3k-um3p-HdhH-28OZ-k4Px-512Sj9$|", "r|.*|"]

With the host on version 4.5 (vdsm-4.50.0.13-1.el8ev.x86_64), the lvmdevices option is enabled by default (use_devicesfile = 1 also appears in /etc/lvm/lvm.conf) and the system.devices file is used:

[root@storage-ge8-vdsm1 lvm]# cat /etc/lvm/devices/system.devices 
# LVM uses devices listed in this file.
# Created by LVM command vgimportdevices pid 1589794 at Wed May  4 11:30:38 2022
VERSION=1.1.1
IDTYPE=devname IDNAME=/dev/sda3 DEVNAME=/dev/sda3 PVID=9YRp7NuSPP2zNAaSR2WPzMS64SmqzaG7 PART=3

* the lvm filter is removed during the upgrade:
[root@storage-ge8-vdsm1 lvm]# vdsm-tool config-lvm-filter
Analyzing host...
LVM devices already configured for vdsm

versions:
vdsm-4.50.0.13-1.el8ev.x86_64
ovirt-engine-4.5.0.5-0.7.el8ev.noarch

Comment 5 bkaraore 2022-07-01 10:20:16 UTC
We have discovered an issue three times during the upgrade of RHV hypervisors used for an RHHI environment.
When we start the upgrade of a hypervisor, with the default reboot at the end of the upgrade, the hypervisor gets stuck in maintenance mode because it cannot find the devices in /etc/fstab that are used for Gluster.
Looking at the details, I believe there is a logic issue during the upgrade.

1. If you try to add devices to the /etc/lvm/devices/system.devices file before removing the LVM filter, it complains about the filter and does not add any devices to the file.
2. After that, if we update the LVM packages, enable devices-file usage by default, and remove the LVM filter in one step, the operation succeeds without any devices in the file.
3. During reboot, the OS complains about the devices needed for the mount operations in /etc/fstab, does not boot, and remains in maintenance mode.

Please consider this and fix it.

Comment 6 Arik 2022-07-01 11:29:32 UTC
(In reply to bkaraore from comment #5)
> We have discovered three times an issue during the upgrade of RHV
> Hypervisors used for the RHHI environment. 
> When we start the upgrade of Hypervisor, and by default reboot at the end of
> the upgrade, the hypervisor is stuck in maintenance mode as it could not
> find devices in /etc/fstab which are used for Gluster.
> When checking details, I believe there is a logic issue during the upgrade.
> 
> 1. If you try to add devices to /etc/lvm/devices/system.devices file before
> removing the LVM filter, it will complain about filters and will not add any
> devices to the file. 
> 2. After that situation, if we update LVM packages, enable device usage by
> default, and remove LVM filter devices, that would succeed without having
> any devices in the file in 1 step.
> 3. During reboot OS will complain about the devices that are wanted for
> mount operation in /etc/fstab and will not boot and remain in maintenance
> mode.
> 
> Please consider this and fix it.

Thanks for reporting this.
I believe this is not the right place for it, though - please check out bz 2095588. Any feedback on the ideas proposed there, or more information you can provide (or, ideally, an environment where this reproduces that we can inspect), would be appreciated, because this issue has not reproduced in our upgrade tests so far.

Comment 7 Nir Soffer 2022-07-04 11:31:00 UTC
(In reply to bkaraore from comment #5)
If bug 2095588 is not the same issue, please file a new bug.
If it is the same issue, please add more details on how to reproduce it to that bug.

Comment 10 bkaraore 2022-09-07 10:33:10 UTC
(In reply to Nir Soffer from comment #7)
> (In reply to bkaraore from comment #5)
> If bug 2095588 is not the same issue, please file a new bug.
> If it is the same issue, please add more details how to reproduce it on that
> bug.

No, this is not a direct issue with RHEL.
It seems this was a one-time issue, and it won't be fixed via bug 2095588.

