Description of problem:
Kernel module mismatches tend to happen quite a bit. When they do, the failing command should report the failure to the user through a non-zero return code.

[root@virt-540 ~]# lvs -a -o +devices
  LV       VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  root     rhel_virt-540 -wi-ao----  <6.20g                                                     /dev/vda2(205)
  swap     rhel_virt-540 -wi-ao---- 820.00m                                                     /dev/vda2(0)
  vdo_pool vdo_sanity    -wi-a-----  25.00g                                                     /dev/mapper/vdo_stack(0)

[root@virt-540 ~]# lvconvert --yes --type vdo-pool -n vdo_lv -V 2T vdo_sanity/vdo_pool
modprobe: FATAL: Module kvdo not found in directory /lib/modules/5.14.0-229.el9.x86_64
  /usr/sbin/modprobe failed: 1
  vdo: Required device-mapper target(s) not detected in your kernel.
[root@virt-540 ~]# echo $?
0

Version-Release number of selected component (if applicable):
kernel-5.14.0-229.el9    BUILT: Thu Jan  5 05:38:37 PM CET 2023
kmod-kvdo-8.2.1.3-64.el9_2    BUILT: Thu Dec 15 07:03:12 PM CET 2022
lvm2-2.03.17-4.el9    BUILT: Tue Jan 10 06:40:12 PM CET 2023
lvm2-libs-2.03.17-4.el9    BUILT: Tue Jan 10 06:40:12 PM CET 2023
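For context on why the zero exit status matters: any wrapper script that branches on the command's return code will treat the failed conversion as a success and keep going. A minimal sketch of that failure mode (the script itself is hypothetical; it just reuses the LV/VG names from the reproducer above):

# Hypothetical wrapper script: because lvconvert exits 0 here, the error
# branch is never taken and the script continues as if the conversion worked.
if lvconvert --yes --type vdo-pool -n vdo_lv -V 2T vdo_sanity/vdo_pool; then
    echo "conversion succeeded"    # reached even though modprobe failed
else
    echo "conversion failed" >&2
    exit 1
fi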
I don't think this is intended to be redirected to me on account of the module rebuild requirements. I think what Corey is asking for is that the lvm command handle and report the error condition better.

For example, if I run `lvconvert --yes --type vdo-pool -n vdo_lv -V 2T vdo_sanity/vdo_pool` and the module can't be loaded via modprobe, it should return an error like "Unable to load required kvdo kernel module" rather than raw modprobe failure output.

Is that correct, Corey?
(In reply to Andy Walsh from comment #1)
> I don't think this is intended to be redirected to me on account of the
> module rebuild requirements. I think what Corey is asking for is that the
> lvm command handle and report the error condition better.
>
> For example, if I run `lvconvert --yes --type vdo-pool -n vdo_lv -V 2T
> vdo_sanity/vdo_pool` and the module can't be loaded via modprobe, it
> should return an error like "Unable to load required kvdo kernel module"
> rather than raw modprobe failure output.
>
> Is that correct, Corey?

Exactly. At a minimum, return non-zero; better yet, emit a warning/error like the one mentioned in comment #1.
Ahh right - I originally got the impression from the title that this was a bug about the missing kvdo module. Looking closer - yes, lvconvert internally misinterpreted the error handling path. Fixed with this upstream patch:

https://listman.redhat.com/archives/lvm-devel/2023-February/024616.html

A few more similar issues were also fixed for Pool/RAID and Integrity conversion with this patch. With the patch we now correctly return exit code 5.
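For anyone re-testing, a minimal check of the fixed behavior could look like the following (a hypothetical sketch, assuming the kvdo module is absent as in the original reproducer; exit code 5 corresponds to lvm2's ECMD_FAILED):

# Hypothetical regression check: after the patch, lvconvert should exit
# non-zero (5, lvm2's ECMD_FAILED) when the kvdo target cannot be loaded.
lvconvert --yes --type vdo-pool -n vdo_lv -V 2T vdo_sanity/vdo_pool
rc=$?
if [ "$rc" -eq 0 ]; then
    echo "BUG: lvconvert returned 0 despite the missing kvdo module" >&2
    exit 1
fi
echo "OK: lvconvert returned $rc"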
Marking Verified:Tested in the latest rpms.

kernel-5.14.0-312.el9    BUILT: Thu May 11 08:04:19 PM CEST 2023
lvm2-2.03.21-1.el9    BUILT: Fri Apr 21 02:33:33 PM CEST 2023
lvm2-libs-2.03.21-1.el9    BUILT: Fri Apr 21 02:33:33 PM CEST 2023

[root@virt-521 ~]# lvcreate --yes --type vdo -n vdo_lv -L 25G vdo_sanity -V 2T
modprobe: FATAL: Module kvdo not found in directory /lib/modules/5.14.0-312.el9.x86_64
  /usr/sbin/modprobe failed: 1
  vdo: Required device-mapper target(s) not detected in your kernel.
  Run `lvcreate --help' for more information.
[root@virt-521 ~]# echo $?
3
Fixed in the latest build as well. Marking VERIFIED.

kernel-5.14.0-312.el9    BUILT: Thu May 11 08:04:19 PM CEST 2023
lvm2-2.03.21-2.el9    BUILT: Thu May 25 12:03:04 AM CEST 2023
lvm2-libs-2.03.21-2.el9    BUILT: Thu May 25 12:03:04 AM CEST 2023

[root@virt-521 ~]# lvcreate --yes --type vdo -n vdo_lv -L 25G vdo_sanity -V 2T
modprobe: FATAL: Module kvdo not found in directory /lib/modules/5.14.0-312.el9.x86_64
  /usr/sbin/modprobe failed: 1
  vdo: Required device-mapper target(s) not detected in your kernel.
  Run `lvcreate --help' for more information.
[root@virt-521 ~]# echo $?
3