lvmdevices is the modern way to manage devices, replacing the lvm filter. It has been optional in RHEL since 8.5 and is the default in RHEL/CentOS Stream 9. Due to a bug in anaconda, lvmdevices is *not* enabled by default in current CentOS Stream 9 builds. This should be fixed in future builds. Once it is fixed, the lvm filter configured using "vdsm-tool config-lvm-filter" and used by vdsm commands will be ignored, restoring all the issues related to running without an lvm filter.

To support CentOS Stream 9, we should update vdsm to use lvm devices instead of the lvm filter. To simplify code and support, we should use lvm devices on RHEL 8.6 as well. This change is required to support block storage on CentOS Stream 9.

## Benefits

To the admin:
- No need to edit the lvm filter, which is hard to do right.

To oVirt:
- No need to create and verify the lvm filter
- The issue of a user-modified lvm filter is gone (currently fails vdsm-tool config-lvm-filter)
- Simpler vdsm lvm code
- Same code and upgrade flows on all supported distros (RHEL 8, CentOS Stream 8, CentOS Stream 9)

## How lvm devices work

During installation, devices/use_devicesfile = 1 is set in lvm.conf. When this is enabled and the system.devices file exists, the lvm filter is ignored.

When an admin adds a new PV or extends a VG, the PV is added to the devices file (/etc/lvm/devices/system.devices). The system probes and scans only the devices listed in that file. The admin does not have to create a complex and fragile lvm filter to ensure that the system does not touch devices it should not.

oVirt does not need to create an lvm filter when adding a host, and the issue of a user-modified lvm filter that oVirt does not know how to verify does not exist.
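The check for whether the devices file is enabled can be sketched as a small helper. This is an illustrative, naive parser only (the function name is hypothetical, and a real implementation must handle lvm.conf sections and the distro default, which is "enabled when unset" on RHEL 9):

```python
import re

def use_devicesfile_enabled(lvm_conf_text):
    """Return True if use_devicesfile is explicitly enabled in lvm.conf text.

    Naive line-based sketch for illustration: it ignores comments and
    looks for a "use_devicesfile = N" setting. It does not model the
    distro default that applies when the option is absent.
    """
    for line in lvm_conf_text.splitlines():
        # Drop comments and surrounding whitespace.
        line = line.split("#", 1)[0].strip()
        m = re.match(r"use_devicesfile\s*=\s*(\d+)", line)
        if m:
            return m.group(1) == "1"
    return False
```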
Manual: https://man.archlinux.org/man/lvmdevices.8.en

## Flows

### Add host

- Run vdsm-tool configure to configure lvm and multipath
- There is no need to run vdsm-tool config-lvm-filter

### vdsm-tool is-configured

- Fail if devices/use_devicesfile is not enabled
- Fail if the system.devices file is not configured

### vdsm-tool configure

- lvm
  - Enable devices/use_devicesfile = 1
  - If the devices file was not enabled, create the initial devices file (vgimportdevices vg-name)
  - If the host is using an lvm filter, convert the filter to a devices file
- multipath
  - Update the blacklist, currently part of vdsm-tool config-lvm-filter

### Running vdsm lvm commands

- Use --devices instead of a filter in --config

  Replace:

      --config 'devices { filter = [ "a|^/dev/mapper/x$|", "a|^/dev/mapper/y$|", "r|.*|" ] }'

  with:

      --devices /dev/mapper/x,/dev/mapper/y

- Remove the code formatting the lvm filter

### Upgrade host using lvm filter

- Run vdsm-tool configure to convert the lvm filter to a devices file

## Cleanups

- Remove some code related to the lvm filter (some should remain to support upgrades)

## Limitations

LVM supports disabling lvm devices on RHEL 9 and using an lvm filter, same as on RHEL 8. We will not support this, to simplify code and support. Configuring a host as an oVirt host will enable lvm devices.

## Open issues

In the past, the configuration order of the multipath blacklist and the lvm filter was important, in particular when booting from SAN. We need to check whether this can be handled by running vdsm-tool configure, or whether we need a special command like config-lvm-filter to ensure the order of the changes.

## Examples

### lvm command ignoring the lvm filter when the devices file is used

    # lvs --config 'devices {filter = ["r|.*|"]}'
      Please remove the lvm.conf filter, it is ignored with the devices file.
      LV   VG Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      root cs -wi-ao---- <16.95g
      swap cs -wi-ao----  <2.05g

### devices file for a virtio disk

    # cat /etc/lvm/devices/system.devices
    # LVM uses devices listed in this file.
    # Created by LVM command vgimportdevices pid 1382 at Mon Oct 4 17:09:19 2021
    VERSION=1.1.1
    IDTYPE=devname IDNAME=/dev/vda2 DEVNAME=/dev/vda2 PVID=5qo7HUmKtUQTRLo7iWuIGFq3WSiImir3 PART=2
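The devices file shown above is a simple line-based format: comment lines, a VERSION line, and one line of space-separated KEY=VALUE fields per device. A minimal sketch of parsing it (the function name is hypothetical; real lvm uses its own parser and may include fields not shown here):

```python
def parse_system_devices(text):
    """Parse system.devices content into (version, device entries).

    Illustrative sketch based on the example above only: "#" lines are
    comments, a VERSION=x line gives the format version, and every other
    line holds space-separated KEY=VALUE fields describing one device.
    """
    version = None
    devices = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("VERSION="):
            version = line.split("=", 1)[1]
            continue
        # One device per line, e.g. "IDTYPE=devname IDNAME=/dev/vda2 ..."
        fields = dict(f.split("=", 1) for f in line.split())
        devices.append(fields)
    return version, devices
```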
The system behavior with this feature implemented is as follows:

* The lvm devices file is enabled by default during installation of a new host or a host update.
* If there is any lvm filter, the filter is removed during the upgrade.
* If the admin wants to stay with the lvm filter, or wants to switch back from the devices file to the lvm filter, the admin can create a custom vdsm config with config_method=filter in the [lvm] section, e.g.:

      $ cat /etc/vdsm/vdsm.conf.d/99-lvm.conf
      [lvm]
      config_method=filter

  then run vdsm-tool config-lvm-filter -y and reboot the host. Switching back to the devices file can be done by removing this custom config or by setting config_method=devices.

By default, /etc/lvm/devices/system.devices is used as the devices file, but the user can change this by setting devices/devicesfile in the lvm config. vdsm-tool config-lvm-filter (re)configures lvm if either devices/use_devicesfile is set to 0 in the lvm config, or the file specified by devices/devicesfile in the lvm config does not exist.

Instead of using a specific filter in lvm commands, vdsm now specifies the devices on which an lvm command should act using the --devices $device option, which the admin can observe in the vdsm logs in place of filter=["..."].

Any other behavior should be the same as when using the lvm filter (i.e. the user is not supposed to configure anything manually, the vdsm defaults should just work, and all the flows should work as before).

QA can test this by running all the storage tests on block based storage. This should be tested with a RHEL host as well as with oVirt Node. QA should also test that a RHEL host and an oVirt Node host are able to boot with the lvm devices file configured by vdsm, especially when booting from multipath devices (see e.g. BZ #2016173 as an example where a host can have issues when booting from a multipath device).
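The config_method switch described above can be sketched with a standard ini parse. This is an assumption-laden illustration, not vdsm's actual code: the option name config_method and section [lvm] come from the text, while the helper name and the "devices" default are hypothetical (matching the described default behavior):

```python
import configparser

def lvm_config_method(conf_text):
    """Return the configured lvm config method: "devices" or "filter".

    Sketch of the behavior described above, assuming vdsm.conf-style ini
    content with an optional [lvm] section. Defaulting to "devices" when
    the option is absent mirrors the documented default behavior.
    """
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    # fallback covers both a missing [lvm] section and a missing option.
    return cp.get("lvm", "config_method", fallback="devices")
```

For example, a host with /etc/vdsm/vdsm.conf.d/99-lvm.conf containing "[lvm]\nconfig_method=filter" would be configured with an lvm filter, while a host with no such drop-in would use the devices file.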
Verified

* The lvm devices file is enabled by default during installation of a new host or a host update, when upgrading from 4.4.10 to 4.5.0:

  When the host is still on version 4.4:

      [root@storage-ge8-vdsm2 yum.repos.d]# cat /etc/lvm/lvm.conf | grep '^filter ='
      filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-bfzQnM-XP3k-um3p-HdhH-28OZ-k4Px-512Sj9$|", "r|.*|"]

  When the host is on version 4.5 (vdsm-4.50.0.13-1.el8ev.x86_64), by default we get the lvmdevices option (use_devicesfile = 1 also appears in /etc/lvm/lvm.conf) and the system.devices file is used:

      [root@storage-ge8-vdsm1 lvm]# cat /etc/lvm/devices/system.devices
      # LVM uses devices listed in this file.
      # Created by LVM command vgimportdevices pid 1589794 at Wed May 4 11:30:38 2022
      VERSION=1.1.1
      IDTYPE=devname IDNAME=/dev/sda3 DEVNAME=/dev/sda3 PVID=9YRp7NuSPP2zNAaSR2WPzMS64SmqzaG7 PART=3

* The lvm filter is removed during the upgrade:

      [root@storage-ge8-vdsm1 lvm]# vdsm-tool config-lvm-filter
      Analyzing host...
      LVM devices already configured for vdsm

  Versions:
  vdsm-4.50.0.13-1.el8ev.x86_64
  ovirt-engine-4.5.0.5-0.7.el8ev.noarch
We have discovered, three times now, an issue during the upgrade of RHV hypervisors used for an RHHI environment. When we start the upgrade of a hypervisor, with the default reboot at the end of the upgrade, the hypervisor gets stuck in maintenance mode because it cannot find the devices in /etc/fstab that are used for Gluster. Looking at the details, I believe there is a logic issue during the upgrade:

1. If you try to add devices to the /etc/lvm/devices/system.devices file before removing the LVM filter, the tool complains about the filter and does not add any devices to the file.
2. After that, if we update the LVM packages, enable device usage by default, and remove the LVM filter in one step, this succeeds without any devices being present in the file.
3. During reboot the OS complains about the devices wanted for the mount operations in /etc/fstab, does not boot, and remains in maintenance mode.

Please consider this and fix it.
(In reply to bkaraore from comment #5)

Thanks for reporting this. I believe this is not the right place, though; please check out bug 2095588. Any feedback on the ideas proposed there, or more information you can provide (ideally an environment where this reproduces that we can inspect), would be appreciated, because this issue has not reproduced in our upgrade tests so far.
(In reply to bkaraore from comment #5)

If bug 2095588 is not the same issue, please file a new bug. If it is the same issue, please add more details on how to reproduce it to that bug.
(In reply to Nir Soffer from comment #7)
> (In reply to bkaraore from comment #5)
> If bug 2095588 is not the same issue, please file a new bug.
> If it is the same issue, please add more details how to reproduce it on that
> bug.

No, this is not a direct issue with RHEL. It seems this was a one-time issue, and it won't be fixed via bug 2095588.