Bug 2026640
Field | Value
---|---
Summary | Host fails to boot if lvm filter uses /dev/disk/by-id/lvm-pv-uuid-*
Product | Red Hat Enterprise Linux 8
Component | lvm2
lvm2 sub component | Udev
Status | CLOSED ERRATA
Severity | urgent
Priority | urgent
Version | CentOS Stream
Target Milestone | rc
Target Release | ---
Hardware | Unspecified
OS | Unspecified
Fixed In Version | lvm2-2.03.14-3.el8
Reporter | Nir Soffer <nsoffer>
Assignee | David Teigland <teigland>
QA Contact | cluster-qe <cluster-qe>
CC | agk, bstinson, cmarthal, didi, heinzm, jbrassow, jwboyer, mcsontos, msnitzer, prajnoha, teigland, zkabelac
Type | Bug
Last Closed | 2022-05-10 15:22:14 UTC
Bug Blocks | 2026370
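For context, the configuration named in the summary pins LVM to specific PVs through stable udev symlinks. A minimal sketch of such a filter in `/etc/lvm/lvm.conf` (the UUID below is a hypothetical placeholder, not one from this report):

```
# /etc/lvm/lvm.conf -- illustrative sketch only; the PV UUID is a placeholder.
# Accept only the PV behind its stable lvm-pv-uuid symlink, reject everything else.
devices {
    filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-EXAMPLE-PV-UUID$|", "r|.*|" ]
}
```

The `lvm-pv-uuid-*` links are created by udev from the PV UUID; `pvs -o pv_name,pv_uuid` lists the UUIDs for the PVs on a host.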
Description
Nir Soffer, 2021-11-25 11:31:21 UTC
Didi, can you add more details on how to reproduce this? I guess we need a specific partition layout, since we reproduced this only on oVirt node (RHV-H).

---

(In reply to Nir Soffer from comment #1)
> Didi, can you add more details on how to reproduce this?

Not sure how to create a minimal reproducer, sorry.

> I guess we need specific partition layout since we reproduced this
> only on oVirt node (RHV-H).

This is what you have on a live machine installed from ovirt-node:

```
# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Sun Nov 14 02:26:16 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/onn_ibm-p8-kvm-03-guest-02/ovirt-node-ng-4.5.0-0.20211111.0+1 /              xfs  defaults,discard 0 0
UUID=2fd3bb26-3437-4b89-b24f-3a7b067c3d57                          /boot          xfs  defaults         0 0
/dev/mapper/onn_ibm--p8--kvm--03--guest--02-home                   /home          xfs  defaults,discard 0 0
/dev/mapper/onn_ibm--p8--kvm--03--guest--02-tmp                    /tmp           xfs  defaults,discard 0 0
/dev/mapper/onn_ibm--p8--kvm--03--guest--02-var                    /var           xfs  defaults,discard 0 0
/dev/mapper/onn_ibm--p8--kvm--03--guest--02-var_crash              /var/crash     xfs  defaults,discard 0 0
/dev/mapper/onn_ibm--p8--kvm--03--guest--02-var_log                /var/log       xfs  defaults,discard 0 0
/dev/mapper/onn_ibm--p8--kvm--03--guest--02-var_log_audit          /var/log/audit xfs  defaults,discard 0 0
/dev/mapper/onn_ibm--p8--kvm--03--guest--02-swap                   none           swap defaults         0 0

# lvs
  LV                                 VG                         Attr       LSize   Pool   Origin                           Data%  Meta%  Move Log Cpy%Sync Convert
  home                               onn_ibm-p8-kvm-03-guest-02 Vwi-aotz--   1.00g pool00                                   1.04
  ovirt-node-ng-4.5.0-0.20211111.0   onn_ibm-p8-kvm-03-guest-02 Vwi---tz-k <20.09g pool00 root
  ovirt-node-ng-4.5.0-0.20211111.0+1 onn_ibm-p8-kvm-03-guest-02 Vwi-aotz-- <20.09g pool00 ovirt-node-ng-4.5.0-0.20211111.0 31.14
  pool00                             onn_ibm-p8-kvm-03-guest-02 twi-aotz-- <57.09g                                         12.85   2.05
  root                               onn_ibm-p8-kvm-03-guest-02 Vri---tz-k <20.09g pool00
  swap                               onn_ibm-p8-kvm-03-guest-02 -wi-ao----   5.93g
  tmp                                onn_ibm-p8-kvm-03-guest-02 Vwi-aotz--   1.00g pool00                                   1.29
  var                                onn_ibm-p8-kvm-03-guest-02 Vwi-aotz--  15.00g pool00                                   4.62
  var_crash                          onn_ibm-p8-kvm-03-guest-02 Vwi-aotz--  10.00g pool00                                   0.11
  var_log                            onn_ibm-p8-kvm-03-guest-02 Vwi-aotz--   8.00g pool00                                   2.83
  var_log_audit                      onn_ibm-p8-kvm-03-guest-02 Vwi-aotz--   2.00g pool00                                   1.42
```

This is on an OST env with an ovirt-node image from a few days before we started building with the new lvm2.

---

We can probably improve how lvm handles PV symlinks in the filter. However, that is not relevant here, because this is all an effect of using the RHEL 9 autoactivation code in RHEL 8.

I have pushed a patch upstream to handle the case of a filter accepting a PV via symlinks, where that PV is needed for autoactivation:

https://sourceware.org/git/?p=lvm2.git;a=commit;h=d12baba1a9bfe2d82537b20bc768758d84b263b6

---

Marking this Verified:Tested (SanityOnly) in the latest rpms, based on our boot stack regression test. To be clear, QA was never able to reproduce this issue.

```
kernel-4.18.0-355.el8.kpq0    BUILT: Wed Dec 15 13:27:55 CST 2021
lvm2-2.03.14-3.el8            BUILT: Tue Jan  4 14:54:16 CST 2022
lvm2-libs-2.03.14-3.el8       BUILT: Tue Jan  4 14:54:16 CST 2022
```

---

Marking VERIFIED (SanityOnly) with the latest kernel/userspace. Boot and autoactivate sanity checks passed.

```
kernel-4.18.0-360.el8         BUILT: Sun Jan 16 20:27:55 CST 2022
lvm2-2.03.14-3.el8            BUILT: Tue Jan  4 14:54:16 CST 2022
lvm2-libs-2.03.14-3.el8       BUILT: Tue Jan  4 14:54:16 CST 2022
```

---

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:2038
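The failure mode hinges on how a filter interacts with a device's multiple names: a device is accepted if any of its names (the device node or a udev symlink alias) matches an accept pattern before hitting the trailing reject-all. The sketch below models that first-match-wins semantics in plain shell; the device names and the simplified single accept pattern are illustrative assumptions, not LVM's actual implementation.

```shell
#!/bin/sh
# Sketch of filter matching for a simplified two-pattern filter like the
# one in this bug:  a|^/dev/disk/by-id/lvm-pv-uuid-...|  followed by  r|.*|
# A device passes if ANY of its names is accepted by the accept pattern.

passes_filter() {
    # $@: all known names for one device (device node plus symlink aliases)
    for name in "$@"; do
        case "$name" in
            /dev/disk/by-id/lvm-pv-uuid-*) return 0 ;;  # accept pattern matched
        esac
    done
    return 1  # every name fell through to the trailing reject-all pattern
}

# The kernel reports the PV as /dev/sda2 (hypothetical), but the filter only
# accepts its lvm-pv-uuid alias -- the case the upstream patch had to handle
# for autoactivation:
if passes_filter /dev/sda2 /dev/disk/by-id/lvm-pv-uuid-EXAMPLE; then
    echo "accepted"
else
    echo "rejected"
fi
```

If autoactivation checks only the kernel name (`/dev/sda2` here) instead of all aliases, the PV is wrongly rejected and the VG never activates at boot, which is consistent with the symptom in the summary.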