Bug 2012830
Summary: | [RFE] Use LVM devices file instead of LVM filter to manage storage devices | |
---|---|---|---
Product: | [oVirt] vdsm | Reporter: | Nir Soffer <nsoffer>
Component: | General | Assignee: | Vojtech Juranek <vjuranek>
Status: | CLOSED CURRENTRELEASE | QA Contact: | Shir Fishbain <sfishbai>
Severity: | high | Docs Contact: |
Priority: | unspecified | |
Version: | 4.50 | CC: | aefrat, ahadas, apinnick, bkaraore, bugs, ddacosta, sfishbai, usurse, vjuranek
Target Milestone: | ovirt-4.5.0 | Keywords: | FutureFeature
Target Release: | 4.50.0.10 | Flags: | sbonazzo: ovirt-4.5? pm-rhel: planning_ack? pm-rhel: devel_ack+ pm-rhel: testing_ack+
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | vdsm-4.50.0.10 | Doc Type: | Enhancement
Doc Text: | You can now use the Logical Volume Management (LVM) devices file for managing storage devices instead of LVM filter, which can be complicated to set up and difficult to manage. Starting with RHEL 8.6, this will be the default for storage device management. | |
Story Points: | --- | |
Clone Of: | | Environment: |
Last Closed: | 2022-05-23 06:21:25 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 2016173 | |
Description
Nir Soffer
2021-10-11 11:52:11 UTC
The system behavior with this feature implemented is as follows:

* The LVM devices file is enabled by default during installation of a new host or during a host update.
* If there is any LVM filter, the filter is removed during the upgrade.
* If the admin wants to stay with the LVM filter, or wants to switch back from the devices file to the LVM filter, the admin can create a custom vdsm config with config_method=filter in the [lvm] section, e.g.:

$ cat /etc/vdsm/vdsm.conf.d/99-lvm.conf
[lvm]
config_method = filter

then run vdsm-tool config-lvm-filter -y and reboot the host. Switching back to the devices file can be done by removing this custom config or by setting config_method=devices.

By default, /etc/lvm/devices/system.devices is used as the devices file, but the user can change it by setting devices/devicesfile in the LVM config.

vdsm-tool config-lvm-filter (re)configures LVM if either devices/use_devicesfile is set to 0 in the LVM config, or the file specified by devices/devicesfile in the LVM config does not exist.

Instead of using a specific filter in LVM commands, vdsm now specifies the devices on which an LVM command should act using the --devices $device option, which the admin can observe in the vdsm logs instead of filter=["..."].

Any other behavior should be the same as when using the LVM filter (i.e. the user is not supposed to configure anything manually, the vdsm defaults should just work, and all the flows should work as before).

QA can test this by running all the storage tests on block-based storage. This should be tested with a RHEL host as well as with oVirt Node. QA should also test that a RHEL host and an oVirt Node host are able to boot with the LVM devices file configured by vdsm, especially when booting from multipath devices (see e.g. BZ #2016173 as an example of a host having issues when booting from a multipath device).

Verified

* The LVM devices file is enabled by default during installation of a new host or a host update when upgrading from 4.4.10 to 4.5.0:

When the host is still on the 4.4 version:

[root@storage-ge8-vdsm2 yum.repos.d]# cat /etc/lvm/lvm.conf | grep '^filter ='
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-bfzQnM-XP3k-um3p-HdhH-28OZ-k4Px-512Sj9$|", "r|.*|"]

When the host is on the 4.5 version (vdsm-4.50.0.13-1.el8ev.x86_64), we get the lvmdevices option by default (use_devicesfile = 1 also appears in /etc/lvm/lvm.conf) and the system.devices file is used:

[root@storage-ge8-vdsm1 lvm]# cat /etc/lvm/devices/system.devices
# LVM uses devices listed in this file.
# Created by LVM command vgimportdevices pid 1589794 at Wed May 4 11:30:38 2022
VERSION=1.1.1
IDTYPE=devname IDNAME=/dev/sda3 DEVNAME=/dev/sda3 PVID=9YRp7NuSPP2zNAaSR2WPzMS64SmqzaG7 PART=3

* The LVM filter is removed during the upgrade:

[root@storage-ge8-vdsm1 lvm]# vdsm-tool config-lvm-filter
Analyzing host...
LVM devices already configured for vdsm

versions:
vdsm-4.50.0.13-1.el8ev.x86_64
ovirt-engine-4.5.0.5-0.7.el8ev.noarch
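As a side note for anyone reproducing the verification above, here is a minimal sketch of commands that can be used to inspect the devices-file configuration on a host and to see what an LVM call restricted with --devices looks like. The device path /dev/sdb in the last command is a hypothetical example, not taken from this bug.

```shell
# Check whether LVM is configured to use the devices file
grep use_devicesfile /etc/lvm/lvm.conf

# List the entries currently recorded in /etc/lvm/devices/system.devices
lvmdevices

# Re-run the vdsm configurator; it reports "LVM devices already configured
# for vdsm" when the devices file is already in place
vdsm-tool config-lvm-filter

# An LVM command limited to an explicit device, similar to what vdsm logs
# instead of filter=["..."]; /dev/sdb is a hypothetical device path
pvs --devices /dev/sdb -o pv_name,vg_name,pv_size
```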
We have discovered an issue three times during the upgrade of RHV Hypervisors used for the RHHI environment. When we start the upgrade of a hypervisor, with the default reboot at the end of the upgrade, the hypervisor gets stuck in maintenance mode because it could not find the devices in /etc/fstab which are used for Gluster. When checking the details, I believe there is a logic issue during the upgrade.

1. If you try to add devices to the /etc/lvm/devices/system.devices file before removing the LVM filter, it will complain about filters and will not add any devices to the file.
2. After that situation, if we update the LVM packages, enable device usage by default, and remove the LVM filter devices, that would succeed in one step without having any devices in the file.
3. During reboot the OS will complain about the devices that are wanted for the mount operations in /etc/fstab, will not boot, and will remain in maintenance mode.

Please consider this and fix it.

(In reply to bkaraore from comment #5)
> We have discovered three times an issue during the upgrade of RHV
> Hypervisors used for the RHHI environment.
> When we start the upgrade of Hypervisor, and by default reboot at the end of
> the upgrade, the hypervisor is stuck in maintenance mode as it could not
> find devices in /etc/fstab which are used for Gluster.
> When checking details, I believe there is a logic issue during the upgrade.
>
> 1. If you try to add devices to /etc/lvm/devices/system.devices file before
> removing the LVM filter, it will complain about filters and will not add any
> devices to the file.
> 2. After that situation, if we update LVM packages, enable device usage by
> default, and remove LVM filter devices, that would succeed without having
> any devices in the file in 1 step.
> 3. During reboot OS will complain about the devices that are wanted for
> mount operation in /etc/fstab and will not boot and remain in maintenance
> mode.
>
> Please consider this and fix it.

Thanks for reporting this. I believe it is not the right place though; please check out bz 2095588 - any feedback on the ideas proposed there, or more information you can provide (or ideally an environment where this reproduces that we can inspect), would be appreciated, because this issue did not reproduce in our upgrade tests so far.

(In reply to bkaraore from comment #5)
If bug 2095588 is not the same issue, please file a new bug.
If it is the same issue, please add more details on how to reproduce it on that bug.

(In reply to Nir Soffer from comment #7)
> (In reply to bkaraore from comment #5)
> If bug 2095588 is not the same issue, please file a new bug.
> If it is the same issue, please add more details how to reproduce it on that
> bug.

No, this is not a direct issue with RHEL. And it seems this is a one-time issue, and won't fix on 2095588.
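For context on the ordering issue reported in comment 5, here is a minimal, hypothetical recovery sketch, assuming a host that ended up with an empty /etc/lvm/devices/system.devices while the old filter was still present in /etc/lvm/lvm.conf: the filter has to be removed before the devices can be imported. This only illustrates the ordering constraint; it is not a procedure taken from this bug or from bug 2095588.

```shell
# 1. Remove (or comment out) the old filter line in /etc/lvm/lvm.conf
#    (a manual edit, shown here as a sed one-liner purely for illustration)
sed -i 's/^filter =/# filter =/' /etc/lvm/lvm.conf

# 2. Import the physical volumes of all visible volume groups into
#    /etc/lvm/devices/system.devices; with the filter still active this
#    step would complain about filters and add nothing
vgimportdevices -a

# 3. Verify the entries, then reboot the host
lvmdevices
```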