Bug 1642162
| Summary: | split off cluster mirror images are not being properly activated online | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Corey Marthaler <cmarthal> |
| Component: | lvm2 | Assignee: | Zdenek Kabelac <zkabelac> |
| lvm2 sub component: | Clustered Mirror / cmirrord | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED ERRATA | Docs Contact: | Steven J. Levine <slevine> |
| Severity: | medium | | |
| Priority: | urgent | CC: | agk, heinzm, jbrassow, mcsontos, msnitzer, pasik, prajnoha, tborcin, zkabelac |
| Version: | 7.6 | Keywords: | Regression, TestBlocker |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | lvm2-2.02.186-3.el7 | Doc Type: | Bug Fix |
| Doc Text: | .When an image is split off from an active/active cluster mirror, the resulting logical volume is now properly activated<br>Previously, when you split off an image from an active/active cluster mirror, the resulting new logical volume appeared active but it had no active component. With this fix, the new logical volume is properly activated. | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-03-31 20:04:48 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description: Corey Marthaler, 2018-10-23 20:03:16 UTC
I just saw this fail again, yet it appears this worked in the previous RHEL 7.7 lvm2 build; double-checking...
```
3.10.0-1057.el7.x86_64
lvm2-2.02.185-2.el7 BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-libs-2.02.185-2.el7 BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-cluster-2.02.185-2.el7 BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-lockd-2.02.185-2.el7 BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-python-boom-0.9-18.el7 BUILT: Fri Jun 21 04:18:58 CDT 2019
cmirror-2.02.185-2.el7 BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-1.02.158-2.el7 BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-libs-1.02.158-2.el7 BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-event-1.02.158-2.el7 BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-event-libs-1.02.158-2.el7 BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-persistent-data-0.8.5-1.el7 BUILT: Mon Jun 10 03:58:20 CDT 2019
```
```
harding-02: pvcreate /dev/mapper/mpatha1 /dev/mapper/mpathb1 /dev/mapper/mpathc1 /dev/mapper/mpathd1 /dev/mapper/mpathe1 /dev/mapper/mpathf1
harding-02: vgcreate split_image /dev/mapper/mpatha1 /dev/mapper/mpathb1 /dev/mapper/mpathc1 /dev/mapper/mpathd1 /dev/mapper/mpathe1 /dev/mapper/mpathf1
```
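For context, the plain `vgcreate` above relies on clvmd being active, in which case the new VG is clustered by default; a minimal sketch making the clustered attribute explicit (an assumption, the log does not show this form):

```
# Assumption: explicit equivalent of the setup above. With clvmd running
# (clustered locking), vgcreate defaults to a clustered VG; --clustered y
# forces the attribute regardless.
vgcreate --clustered y split_image /dev/mapper/mpatha1 /dev/mapper/mpathb1 \
    /dev/mapper/mpathc1 /dev/mapper/mpathd1 /dev/mapper/mpathe1 /dev/mapper/mpathf1
```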
```
============================================================
Iteration 1 of 1 started at Fri Jun 28 14:21:20 CDT 2019
============================================================
SCENARIO - [sequentially_split_off_all_pvs]
Create a mirror with many legs and then sequentially split off each one of the PVs
harding-03: lvcreate --activate y --type mirror -m 4 -n split_pvs_sequentially -L 300M split_image
Waiting until all mirror|raid volumes become fully syncd...
   1/1 mirror(s) are fully synced: ( 100.00% )
Sleeping 15 sec
Sleeping 15 sec
splitting off legs:
   /dev/mapper/mpathe1
couldn't find /dev/split_image/new0
```
```
[root@harding-03 ~]# lvs -a -o +devices
  LV                                VG          Attr       LSize   Pool Origin Data% Meta% Move Log                           Cpy%Sync Convert Devices
  new0                              split_image -wi-a----- 300.00m                                                                             /dev/mapper/mpathe1(0)
  split_pvs_sequentially            split_image mwi-a-m--- 300.00m                               [split_pvs_sequentially_mlog] 100.00          split_pvs_sequentially_mimage_0(0),split_pvs_sequentially_mimage_1(0),split_pvs_sequentially_mimage_2(0),split_pvs_sequentially_mimage_3(0)
  [split_pvs_sequentially_mimage_0] split_image iwi-aom--- 300.00m                                                                             /dev/mapper/mpatha1(0)
  [split_pvs_sequentially_mimage_1] split_image iwi-aom--- 300.00m                                                                             /dev/mapper/mpathb1(0)
  [split_pvs_sequentially_mimage_2] split_image iwi-aom--- 300.00m                                                                             /dev/mapper/mpathc1(0)
  [split_pvs_sequentially_mimage_3] split_image iwi-aom--- 300.00m                                                                             /dev/mapper/mpathd1(0)
  [split_pvs_sequentially_mlog]     split_image lwi-aom---   4.00m                                                                             /dev/mapper/mpathf1(0)

[root@harding-03 ~]# ls -l /dev/split_image/new0
ls: cannot access /dev/split_image/new0: No such file or directory
```
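The exact split invocation is not captured in the log above; a minimal reconstruction of the failing step, assuming the test harness splits one leg onto the named PV with `lvconvert --splitmirrors` (the new LV name `new0` matches the log):

```
# Hypothetical reconstruction of the failing step: split the mirror leg
# residing on mpathe1 off into a new standalone LV named new0.
lvconvert --splitmirrors 1 --name new0 split_image/split_pvs_sequentially /dev/mapper/mpathe1

# On a healthy system the split-off LV is activated and gets a device node:
ls -l /dev/split_image/new0
```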
I double checked and this did indeed work in lvm2-2.02.185-1.el7, so something caused this regression in lvm2-2.02.185-2.el7. This feels reminiscent of chasing down bug 1595875 in RHEL 7.6.

Seriously, not a blocker.

Fixed by:
https://www.redhat.com/archives/lvm-devel/2019-October/msg00164.html
https://www.redhat.com/archives/lvm-devel/2019-October/msg00165.html

There are likely many more non-ideal steps during mirror/raid processing that should be upgraded to follow the basic write/suspend/commit/resume rule and to avoid working with 'in-running-command' stored metadata hints, as these lead to unresolvable states if the command is killed between or during the 'commit-resume' step.

brassow: Now that this is in POST, should we rewrite the release note/doc text as a "bug fix" for GA?

This appears fixed in the latest rpms.
```
3.10.0-1109.el7.x86_64
lvm2-2.02.186-3.el7 BUILT: Fri Nov 8 07:07:01 CST 2019
lvm2-libs-2.02.186-3.el7 BUILT: Fri Nov 8 07:07:01 CST 2019
lvm2-cluster-2.02.186-3.el7 BUILT: Fri Nov 8 07:07:01 CST 2019
lvm2-lockd-2.02.186-3.el7 BUILT: Fri Nov 8 07:07:01 CST 2019
lvm2-python-boom-0.9-20.el7 BUILT: Tue Sep 24 06:18:20 CDT 2019
cmirror-2.02.186-3.el7 BUILT: Fri Nov 8 07:07:01 CST 2019
device-mapper-1.02.164-3.el7 BUILT: Fri Nov 8 07:07:01 CST 2019
device-mapper-libs-1.02.164-3.el7 BUILT: Fri Nov 8 07:07:01 CST 2019
device-mapper-event-1.02.164-3.el7 BUILT: Fri Nov 8 07:07:01 CST 2019
device-mapper-event-libs-1.02.164-3.el7 BUILT: Fri Nov 8 07:07:01 CST 2019
device-mapper-persistent-data-0.8.5-1.el7 BUILT: Mon Jun 10 03:58:20 CDT 2019
```
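The cluster state below is presumably the output of `pcs status` (the command itself is not captured in the log):

```
pcs status
```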
```
Cluster name: HARDING
Stack: corosync
Current DC: harding-03 (version 1.1.21-2.el7-f14e36fd43) - partition with quorum
Last updated: Fri Nov 8 12:03:24 2019
Last change: Fri Nov 8 10:50:00 2019 by root via cibadmin on harding-02

2 nodes configured
5 resources configured

Online: [ harding-02 harding-03 ]

Full list of resources:
 smoke-apc (stonith:fence_apc): Started harding-02
 Clone Set: dlm-clone [dlm]
     Started: [ harding-02 harding-03 ]
 Clone Set: clvmd-clone [clvmd]
     Started: [ harding-02 harding-03 ]

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
```
```
============================================================
Iteration 12 of 12 started at Fri Nov 8 11:52:36 CST 2019
============================================================
SCENARIO - [sequentially_split_off_all_images]
Create a mirror with many legs and then sequentially split off each one of the images
harding-03: lvcreate --activate y --type mirror -m 4 -n split_images_sequentially -L 300M split_image
Waiting until all mirror|raid volumes become fully syncd...
   1/1 mirror(s) are fully synced: ( 100.00% )
Sleeping 15 sec
Sleeping 15 sec
splitting off legs...
Deactivating LV split_image/new0 on harding-02... and removing
Deactivating LV split_image/new1 on harding-02... and removing
Deactivating LV split_image/new2 on harding-02... and removing
Deactivating LV split_image/new3 on harding-02... and removing
Deactivating LV split_image/split_images_sequentially on harding-02... and removing
```
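On a fixed build, one can additionally confirm that a split-off LV is genuinely active, rather than merely flagged active in metadata; a hedged sketch using names from the log (`split_image/new0`):

```
# The LV should report as active, and both the device-mapper device and
# the /dev symlink should exist (dm names join VG and LV with a dash):
lvs -o lv_name,lv_attr,active split_image/new0
dmsetup info split_image-new0
ls -l /dev/split_image/new0

# Cluster-wide deactivation and removal, as the test harness then does:
lvchange -an split_image/new0
lvremove split_image/new0
```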
(In reply to Steven J. Levine from comment #12)
> brassow:
>
> Now that this is in POST, should we rewrite the release note/doc text as a
> "bug fix" for GA?

Yes please.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1129