Bug 2095588
Summary: | RHV/RHHI 4.4 -> 4.5 upgrade results in maintenance mode due to LVM use_devicesfile = 1 | ||
---|---|---|---|
Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Sean Haselden <shaselde> |
Component: | vdsm | Assignee: | Albert Esteve <aesteve> |
Status: | CLOSED WONTFIX | QA Contact: | Lukas Svaty <lsvaty> |
Severity: | high | Docs Contact: | |
Priority: | unspecified | ||
Version: | 4.5.0 | CC: | aesteve, ahadas, bkaraore, bugs, bzlotnik, fsun, jean-louis, lsurette, lveyde, mavital, mkalinin, mwaykole, nsoffer, schandle, sfishbai, srevivo, swachira, teigland, vdas, vpapnoi, ycui, ymankad |
Target Milestone: | --- | ||
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | 2090169 | Environment: | |
Last Closed: | 2022-07-11 14:22:12 UTC | Type: | --- |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 2090169 | ||
Bug Blocks: | ||
Description Sean Haselden 2022-06-09 22:43:33 UTC
Based on the info in description 0, this is not the same issue as bug 2090169. In that case, removing the wwid from /etc/multipath/wwids should avoid the issue.

This may be an issue with LUKS-encrypted devices; I don't think this was tested or even considered in the "vdsm-tool config-lvm-filter" tool. We need to reproduce it by building such a RHV-H system. It would help if we could get the output of "vdsm-tool config-lvm-filter" when run on such a host before the upgrade.

As for fixing such a system, we can simplify it by disabling the devices file temporarily and importing only the required vgs. The example given is very risky if the host has FC storage connected - it can import RHV storage domain vgs, and even guest vgs from active lvs for raw disks.

Fixing instructions:

1. Activate the needed vgs, disabling the devices file temporarily:

~~~
# vgchange --devicesfile= -ay gluster_vg_luks_sdb gluster_vg_sdc
~~~

2. Import the devices into the system devices file:

~~~
# vgimportdevices gluster_vg_luks_sdb gluster_vg_sdc
~~~

We're still looking for an environment that this happens on.

Lev, please check comment 22 and comment 23. I think this should be fixed in imagebased (bind mount /gluster_bricks in the chroot?).

(In reply to Sean Haselden from comment #0)
> In the rescue shell it was clear that it didn't boot because LVM did not
> activate the gluster devices, and it failed to mount the related filesystems:

This is explained by comment 22 and comment 23. So this is a new issue and not related to bug 2090169.

(In reply to Nir Soffer from comment #24)
> Lev, please check comment 22 and comment 23. I think this should be fixed
> in imagebased (bind mount /gluster_bricks in the chroot?).

Shouldn't it access/detect it through /dev, just as it does with the LVM-based volumes?

*** Bug 2104515 has been marked as a duplicate of this bug. ***

(In reply to Lev Veyde from comment #26)
> (In reply to Nir Soffer from comment #24)
> > Lev, please check comment 22 and comment 23. I think this should be fixed
> > in imagebased (bind mount /gluster_bricks in the chroot?).
>
> Shouldn't it access/detect it through /dev, just as it does with the LVM-based
> volumes?

No, it needs to see the mounts to detect the required lvs. Run lsblk in the chroot - if it does not show the mountpoints for the lvs, the lvs are not considered when creating the filter or adding entries to the devices file.

Temporary solution without Ansible

Before upgrading, the following procedure can also be applied to avoid hypervisor boot issues.

* Remove LVM filters.

~~~
# sed -i /^filter/d /etc/lvm/lvm.conf
~~~

* Enable system devices. Search for *Allow_mixed_block_sizes* in the */etc/lvm/lvm.conf* file and add a new line after it as follows.

~~~
# sed -i '/^Allow_mixed_block_sizes = 0/a use_devicesfile = 1' /etc/lvm/lvm.conf
~~~

* Populate system devices.

~~~
# vgimportdevices -a
~~~

Continuing with the upgrade will not hit any issue after that.
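As a quick sanity check after either procedure above (the targeted import of the gluster vgs, or vgimportdevices -a), something like the sketch below can be used. It is only a rough sketch: it assumes an lvm2 build that provides the lvmdevices command and keeps the devices file in its default location, and it reuses the example VG names from the comments above, which must be adjusted to the actual host.

~~~
# List the entries now recorded in the system devices file.
lvmdevices

# The devices file itself (default location on lvm2 versions that support it).
cat /etc/lvm/devices/system.devices

# Confirm the gluster VGs are visible without disabling the devices file;
# if this works, the bricks should activate and mount on the next boot.
vgs -o vg_name,pv_name gluster_vg_luks_sdb gluster_vg_sdc
~~~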
The attached KCS was validated; please check out a minor suggestion to improve it in comment 35.

Since we don't have an easy way to handle that, and this is a one-time issue (once fixed, it won't reproduce on future upgrades), following the KCS is the best way to go.

(In reply to Arik from comment #38)
> The attached KCS was validated; please check out a minor suggestion to
> improve it in comment 35

The only minor point I'd make to the KCS solution is maybe using this in step 3:

# vgimportdevices <volume group name>

(In reply to Arik from comment #39)
> (In reply to Arik from comment #38)
> > The attached KCS was validated; please check out a minor suggestion to
> > improve it in comment 35
>
> The only minor point I'd make to the KCS solution is maybe using this in step 3:
> # vgimportdevices <volume group name>

I assume this command will leave the current disks in the devices file and add the ones specified as part of "<volume group name>"? If so, I can make the edit.

(In reply to Sean Haselden from comment #40)
> (In reply to Arik from comment #39)
> > (In reply to Arik from comment #38)
> > > The attached KCS was validated; please check out a minor suggestion to
> > > improve it in comment 35
> >
> > The only minor point I'd make to the KCS solution is maybe using this in step 3:
> > # vgimportdevices <volume group name>
>
> I assume this command will leave the current disks in the devices file and
> add the ones specified as part of "<volume group name>"? If so, I can make
> the edit.

Yes, that is correct. vgimportdevices creates the devices file if none exists and appends new devices individually. In fact, vdsm-tool invokes vgimportdevices in a loop for the proper devices when we run "vdsm-tool config-lvm-filter".
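To make that last point concrete, here is a minimal sketch of the equivalent manual loop. It only illustrates the behavior described above, it is not the actual vdsm-tool code, and the VG names are the examples used earlier in this bug.

~~~
# Illustration only - not the actual vdsm-tool code.
# Each vgimportdevices call keeps the existing entries in the devices file
# and appends the PVs of the named VG, mirroring what
# "vdsm-tool config-lvm-filter" does for the VGs it selects.
for vg in gluster_vg_luks_sdb gluster_vg_sdc; do
    vgimportdevices "$vg"
done

lvmdevices    # shows the previous entries plus the newly appended PVs
~~~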