Description of problem:
This issue was described in the verification of the 7.6.z bug 1638084
(https://bugzilla.redhat.com/show_bug.cgi?id=1638084#c5) but wasn't documented in an
actual bug report for our tests to reference yet. It's similar to the regression seen
in the rhel7.6 bug 1595875, however it now only affects clustered mirror image
splitting, not single-machine splitting.

SCENARIO - [sequentially_split_off_all_images]
Create a mirror with many legs and then sequentially split off each one of the images

mckinley-03: lvcreate --activate y --type mirror -m 4 -n split_images_sequentially -L 300M split_image
Waiting until all mirror|raid volumes become fully syncd...
  1/1 mirror(s) are fully synced: ( 100.00% )

[root@mckinley-03 ~]# lvs -a -o +devices
  LV                                   VG          Attr       LSize   Log                              Cpy%Sync Devices
  split_images_sequentially            split_image mwi-a-m--- 300.00m [split_images_sequentially_mlog] 100.00   split_images_sequentially_mimage_0(0),split_images_sequentially_mimage_1(0),split_images_sequentially_mimage_2(0),split_images_sequentially_mimage_3(0),split_images_sequentially_mimage_4(0)
  [split_images_sequentially_mimage_0] split_image iwi-aom--- 300.00m                                           /dev/mapper/mpatha1(0)
  [split_images_sequentially_mimage_1] split_image iwi-aom--- 300.00m                                           /dev/mapper/mpatha2(0)
  [split_images_sequentially_mimage_2] split_image iwi-aom--- 300.00m                                           /dev/mapper/mpathb1(0)
  [split_images_sequentially_mimage_3] split_image iwi-aom--- 300.00m                                           /dev/mapper/mpathb2(0)
  [split_images_sequentially_mimage_4] split_image iwi-aom--- 300.00m                                           /dev/mapper/mpathc1(0)
  [split_images_sequentially_mlog]     split_image lwi-aom---   4.00m                                           /dev/mapper/mpathe2(0)

splitting off legs...
mckinley-03: lvconvert --yes --splitmirrors 1 --name new0 split_image/split_images_sequentially

# Supposedly the new split image is active on all nodes in the cluster, yet
# there's no dm device associated with it
[root@mckinley-03 ~]# lvs -a -o +devices
  LV                                   VG          Attr       LSize   Log                              Cpy%Sync Devices
  new0                                 split_image -wi-a----- 300.00m                                           /dev/mapper/mpathc1(0)
  split_images_sequentially            split_image mwi-a-m--- 300.00m [split_images_sequentially_mlog] 100.00   split_images_sequentially_mimage_0(0),split_images_sequentially_mimage_1(0),split_images_sequentially_mimage_2(0),split_images_sequentially_mimage_3(0)
  [split_images_sequentially_mimage_0] split_image iwi-aom--- 300.00m                                           /dev/mapper/mpatha1(0)
  [split_images_sequentially_mimage_1] split_image iwi-aom--- 300.00m                                           /dev/mapper/mpatha2(0)
  [split_images_sequentially_mimage_2] split_image iwi-aom--- 300.00m                                           /dev/mapper/mpathb1(0)
  [split_images_sequentially_mimage_3] split_image iwi-aom--- 300.00m                                           /dev/mapper/mpathb2(0)
  [split_images_sequentially_mlog]     split_image lwi-aom---   4.00m                                           /dev/mapper/mpathe2(0)

# No "new0" device exists
[root@mckinley-03 ~]# ls /dev/split_image/new0
ls: cannot access /dev/split_image/new0: No such file or directory

# No "new0" device exists
[root@mckinley-03 ~]# dmsetup status | grep split
split_image-split_images_sequentially_mimage_0: 0 614400 linear
split_image-split_images_sequentially_mlog: 0 8192 linear
split_image-split_images_sequentially: 0 614400 mirror 4 253:28 253:29 253:30 253:31 150/150 1 AAAA 3 clustered-disk 253:27 A
split_image-split_images_sequentially_mimage_4: 0 614400 linear
split_image-split_images_sequentially_mimage_3: 0 614400 linear
split_image-split_images_sequentially_mimage_2: 0 614400 linear
split_image-split_images_sequentially_mimage_1: 0 614400 linear

Version-Release number of selected component (if applicable):
3.10.0-957.el7.x86_64

lvm2-2.02.180-10.el7_6.1                      BUILT: Wed Oct 10 12:43:42 CDT 2018
lvm2-libs-2.02.180-10.el7_6.1                 BUILT: Wed Oct 10 12:43:42 CDT 2018
lvm2-cluster-2.02.180-10.el7_6.1              BUILT: Wed Oct 10 12:43:42 CDT 2018
lvm2-lockd-2.02.180-10.el7_6.1                BUILT: Wed Oct 10 12:43:42 CDT 2018
lvm2-python-boom-0.9-11.el7                   BUILT: Mon Sep 10 04:49:22 CDT 2018
cmirror-2.02.180-10.el7_6.1                   BUILT: Wed Oct 10 12:43:42 CDT 2018
device-mapper-1.02.149-10.el7_6.1             BUILT: Wed Oct 10 12:43:42 CDT 2018
device-mapper-libs-1.02.149-10.el7_6.1        BUILT: Wed Oct 10 12:43:42 CDT 2018
device-mapper-event-1.02.149-10.el7_6.1       BUILT: Wed Oct 10 12:43:42 CDT 2018
device-mapper-event-libs-1.02.149-10.el7_6.1  BUILT: Wed Oct 10 12:43:42 CDT 2018
device-mapper-persistent-data-0.7.3-3.el7     BUILT: Tue Nov 14 05:07:18 CST 2017

How reproducible:
Every time
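For anyone reproducing this, the quickest way to confirm the symptom is to compare what LVM reports with what device-mapper actually has. A minimal check sequence is sketched below; it assumes the VG/LV names from the logs above (split_image/new0), and the lvchange calls at the end are only a generic re-activation attempt, not a verified workaround for this bug.

# Does LVM itself consider the split-off LV active?
lvs -o lv_name,lv_attr,lv_active split_image/new0

# Is there actually a dm device and a /dev node backing it?
dmsetup info split_image-new0
ls -l /dev/split_image/new0

# Generic re-activation attempt (assumption, not a verified workaround):
lvchange -an split_image/new0
lvchange -ay split_image/new0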
I just saw this fail again, yet it appears this worked in the previous lvm2 7.7 build; double checking...

3.10.0-1057.el7.x86_64

lvm2-2.02.185-2.el7                        BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-libs-2.02.185-2.el7                   BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-cluster-2.02.185-2.el7                BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-lockd-2.02.185-2.el7                  BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-python-boom-0.9-18.el7                BUILT: Fri Jun 21 04:18:58 CDT 2019
cmirror-2.02.185-2.el7                     BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-1.02.158-2.el7               BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-libs-1.02.158-2.el7          BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-event-1.02.158-2.el7         BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-event-libs-1.02.158-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-persistent-data-0.8.5-1.el7  BUILT: Mon Jun 10 03:58:20 CDT 2019

harding-02: pvcreate /dev/mapper/mpatha1 /dev/mapper/mpathb1 /dev/mapper/mpathc1 /dev/mapper/mpathd1 /dev/mapper/mpathe1 /dev/mapper/mpathf1
harding-02: vgcreate split_image /dev/mapper/mpatha1 /dev/mapper/mpathb1 /dev/mapper/mpathc1 /dev/mapper/mpathd1 /dev/mapper/mpathe1 /dev/mapper/mpathf1

============================================================
Iteration 1 of 1 started at Fri Jun 28 14:21:20 CDT 2019
============================================================
SCENARIO - [sequentially_split_off_all_pvs]
Create a mirror with many legs and then sequentially split off each one of the PVs

harding-03: lvcreate --activate y --type mirror -m 4 -n split_pvs_sequentially -L 300M split_image
Waiting until all mirror|raid volumes become fully syncd...
  1/1 mirror(s) are fully synced: ( 100.00% )
Sleeping 15 sec
Sleeping 15 sec

splitting off legs: /dev/mapper/mpathe1
couldn't find /dev/split_image/new0

[root@harding-03 ~]# lvs -a -o +devices
  LV                                VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log                           Cpy%Sync Convert Devices
  new0                              split_image -wi-a----- 300.00m                                                                               /dev/mapper/mpathe1(0)
  split_pvs_sequentially            split_image mwi-a-m--- 300.00m                               [split_pvs_sequentially_mlog]  100.00           split_pvs_sequentially_mimage_0(0),split_pvs_sequentially_mimage_1(0),split_pvs_sequentially_mimage_2(0),split_pvs_sequentially_mimage_3(0)
  [split_pvs_sequentially_mimage_0] split_image iwi-aom--- 300.00m                                                                               /dev/mapper/mpatha1(0)
  [split_pvs_sequentially_mimage_1] split_image iwi-aom--- 300.00m                                                                               /dev/mapper/mpathb1(0)
  [split_pvs_sequentially_mimage_2] split_image iwi-aom--- 300.00m                                                                               /dev/mapper/mpathc1(0)
  [split_pvs_sequentially_mimage_3] split_image iwi-aom--- 300.00m                                                                               /dev/mapper/mpathd1(0)
  [split_pvs_sequentially_mlog]     split_image lwi-aom---   4.00m                                                                               /dev/mapper/mpathf1(0)

[root@harding-03 ~]# ls -l /dev/split_image/new0
ls: cannot access /dev/split_image/new0: No such file or directory
I double-checked and this did indeed work in lvm2-2.02.185-1.el7, so something in lvm2-2.02.185-2.el7 caused this regression. This feels reminiscent of chasing down bug 1595875 in rhel7.6.
seriously, not a blocker.
Fixed by:
https://www.redhat.com/archives/lvm-devel/2019-October/msg00164.html
https://www.redhat.com/archives/lvm-devel/2019-October/msg00165.html

That said, there are likely many more non-ideal steps in mirror/raid processing that should be upgraded to follow the basic write/suspend/commit/resume rule and to avoid working with metadata hints stored in the running command, since those lead to unresolvable states when the command is killed between or during the 'commit-resume' step.
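As an illustration of that failure mode (not the actual fix), a command killed after the metadata commit but before the resume can leave the device suspended. A rough way to spot and clear that by hand, assuming the split_image/new0 names used earlier in this report:

# A device stuck between commit and resume shows "State: SUSPENDED" here
dmsetup info split_image-new0 | grep State

# Possible manual recovery (assumption, not part of the referenced patches):
dmsetup resume split_image-new0
lvchange --refresh split_image/new0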
brassow:

Now that this is in post, should we rewrite the release note/doc text as a "bug fix" for GA?
This appears fixed in the latest rpms.

3.10.0-1109.el7.x86_64

lvm2-2.02.186-3.el7                        BUILT: Fri Nov 8 07:07:01 CST 2019
lvm2-libs-2.02.186-3.el7                   BUILT: Fri Nov 8 07:07:01 CST 2019
lvm2-cluster-2.02.186-3.el7                BUILT: Fri Nov 8 07:07:01 CST 2019
lvm2-lockd-2.02.186-3.el7                  BUILT: Fri Nov 8 07:07:01 CST 2019
lvm2-python-boom-0.9-20.el7                BUILT: Tue Sep 24 06:18:20 CDT 2019
cmirror-2.02.186-3.el7                     BUILT: Fri Nov 8 07:07:01 CST 2019
device-mapper-1.02.164-3.el7               BUILT: Fri Nov 8 07:07:01 CST 2019
device-mapper-libs-1.02.164-3.el7          BUILT: Fri Nov 8 07:07:01 CST 2019
device-mapper-event-1.02.164-3.el7         BUILT: Fri Nov 8 07:07:01 CST 2019
device-mapper-event-libs-1.02.164-3.el7    BUILT: Fri Nov 8 07:07:01 CST 2019
device-mapper-persistent-data-0.8.5-1.el7  BUILT: Mon Jun 10 03:58:20 CDT 2019

Cluster name: HARDING
Stack: corosync
Current DC: harding-03 (version 1.1.21-2.el7-f14e36fd43) - partition with quorum
Last updated: Fri Nov 8 12:03:24 2019
Last change: Fri Nov 8 10:50:00 2019 by root via cibadmin on harding-02

2 nodes configured
5 resources configured

Online: [ harding-02 harding-03 ]

Full list of resources:

 smoke-apc      (stonith:fence_apc):    Started harding-02
 Clone Set: dlm-clone [dlm]
     Started: [ harding-02 harding-03 ]
 Clone Set: clvmd-clone [clvmd]
     Started: [ harding-02 harding-03 ]

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

============================================================
Iteration 12 of 12 started at Fri Nov 8 11:52:36 CST 2019
============================================================
SCENARIO - [sequentially_split_off_all_images]
Create a mirror with many legs and then sequentially split off each one of the images

harding-03: lvcreate --activate y --type mirror -m 4 -n split_images_sequentially -L 300M split_image
Waiting until all mirror|raid volumes become fully syncd...
  1/1 mirror(s) are fully synced: ( 100.00% )
Sleeping 15 sec
Sleeping 15 sec

splitting off legs...
Deactivating LV split_image/new0 on harding-02... and removing
Deactivating LV split_image/new1 on harding-02... and removing
Deactivating LV split_image/new2 on harding-02... and removing
Deactivating LV split_image/new3 on harding-02... and removing
Deactivating LV split_image/split_images_sequentially on harding-02... and removing
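Since the original failure was cluster-specific, it's worth confirming the split-off LV on both nodes right after each split, before the test cleans up and removes the new LVs. A small cross-node spot check along these lines (node and LV names taken from the logs above; the loop itself is an illustrative assumption, not part of the test case):

# Confirm the split-off LV has a real dm device and /dev node on every node
for node in harding-02 harding-03; do
    echo "== $node =="
    ssh $node 'lvs -o lv_name,lv_active split_image/new0; ls -l /dev/split_image/new0'
done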
(In reply to Steven J. Levine from comment #12)
> brassow:
>
> Now that this is in post, should we rewrite the release note/doc text as a
> "bug fix" for GA?

yes please.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1129