Bug 1642162 - split off cluster mirror images are not being properly activated online
Summary: split off cluster mirror images are not being properly activated online
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.6
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
Docs Contact: Steven J. Levine
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-10-23 20:03 UTC by Corey Marthaler
Modified: 2021-09-03 12:51 UTC
CC List: 9 users

Fixed In Version: lvm2-2.02.186-3.el7
Doc Type: Bug Fix
Doc Text:
.When an image is split off from an active/active cluster mirror, the resulting logical volume is now properly activated
Previously, when you split off an image from an active/active cluster mirror, the resulting new logical volume appeared active but it had no active component. With this fix, the new logical volume is properly activated.
Clone Of:
Environment:
Last Closed: 2020-03-31 20:04:48 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2020:1129   Last Updated: 2020-03-31 20:05:27 UTC

Description Corey Marthaler 2018-10-23 20:03:16 UTC
Description of problem:
This issue was described during the verification of the 7.6.z bug 1638084 (https://bugzilla.redhat.com/show_bug.cgi?id=1638084#c5) but hadn't yet been documented in a bug report of its own for our tests to reference. It is similar to the regression seen in the rhel7.6 bug 1595875; however, it now affects only clustered mirror image splitting, not single-machine splitting.


SCENARIO - [sequentially_split_off_all_images]
Create a mirror with many legs and then sequentially split off each one of the images
mckinley-03: lvcreate --activate y --type mirror -m 4 -n split_images_sequentially -L 300M split_image

Waiting until all mirror|raid volumes become fully syncd...
   1/1 mirror(s) are fully synced: ( 100.00% )

[root@mckinley-03 ~]# lvs -a -o +devices
  LV                                   VG           Attr       LSize    Log                              Cpy%Sync Devices
  split_images_sequentially            split_image  mwi-a-m--- 300.00m  [split_images_sequentially_mlog] 100.00   split_images_sequentially_mimage_0(0),split_images_sequentially_mimage_1(0),split_images_sequentially_mimage_2(0),split_images_sequentially_mimage_3(0),split_images_sequentially_mimage_4(0)
  [split_images_sequentially_mimage_0] split_image  iwi-aom--- 300.00m                                            /dev/mapper/mpatha1(0)
  [split_images_sequentially_mimage_1] split_image  iwi-aom--- 300.00m                                            /dev/mapper/mpatha2(0)
  [split_images_sequentially_mimage_2] split_image  iwi-aom--- 300.00m                                            /dev/mapper/mpathb1(0)
  [split_images_sequentially_mimage_3] split_image  iwi-aom--- 300.00m                                            /dev/mapper/mpathb2(0)
  [split_images_sequentially_mimage_4] split_image  iwi-aom--- 300.00m                                            /dev/mapper/mpathc1(0)
  [split_images_sequentially_mlog]     split_image  lwi-aom---   4.00m                                            /dev/mapper/mpathe2(0)

splitting off legs...
mckinley-03: lvconvert --yes --splitmirrors 1 --name new0 split_image/split_images_sequentially

# Supposedly the new split image is active on all nodes in the cluster, yet there's no dm device associated with it
[root@mckinley-03 ~]# lvs -a -o +devices
  LV                                   VG           Attr       LSize    Log                              Cpy%Sync Devices
  new0                                 split_image  -wi-a----- 300.00m                                            /dev/mapper/mpathc1(0)
  split_images_sequentially            split_image  mwi-a-m--- 300.00m  [split_images_sequentially_mlog] 100.00   split_images_sequentially_mimage_0(0),split_images_sequentially_mimage_1(0),split_images_sequentially_mimage_2(0),split_images_sequentially_mimage_3(0)
  [split_images_sequentially_mimage_0] split_image  iwi-aom--- 300.00m                                            /dev/mapper/mpatha1(0)
  [split_images_sequentially_mimage_1] split_image  iwi-aom--- 300.00m                                            /dev/mapper/mpatha2(0)
  [split_images_sequentially_mimage_2] split_image  iwi-aom--- 300.00m                                            /dev/mapper/mpathb1(0)
  [split_images_sequentially_mimage_3] split_image  iwi-aom--- 300.00m                                            /dev/mapper/mpathb2(0)
  [split_images_sequentially_mlog]     split_image  lwi-aom---   4.00m                                            /dev/mapper/mpathe2(0)

# No "new0" device exists
[root@mckinley-03 ~]# ls /dev/split_image/new0
ls: cannot access /dev/split_image/new0: No such file or directory

# No "new0" device exists
[root@mckinley-03 ~]# dmsetup status | grep split
split_image-split_images_sequentially_mimage_0: 0 614400 linear 
split_image-split_images_sequentially_mlog: 0 8192 linear 
split_image-split_images_sequentially: 0 614400 mirror 4 253:28 253:29 253:30 253:31 150/150 1 AAAA 3 clustered-disk 253:27 A
split_image-split_images_sequentially_mimage_4: 0 614400 linear 
split_image-split_images_sequentially_mimage_3: 0 614400 linear 
split_image-split_images_sequentially_mimage_2: 0 614400 linear 
split_image-split_images_sequentially_mimage_1: 0 614400 linear 
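
A possible manual recovery, offered here only as a hedged sketch (it is not verified anywhere in this report), would be to cycle the split-off LV through a clean deactivate/activate so that clvmd reloads its device-mapper table; the VG and LV names are the ones from the scenario above.

lvchange -an split_image/new0    # clear the stale "active" state cluster-wide
lvchange -ay split_image/new0    # re-activate the split-off LV
ls -l /dev/split_image/new0      # confirm the device node now exists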



Version-Release number of selected component (if applicable):
3.10.0-957.el7.x86_64

lvm2-2.02.180-10.el7_6.1    BUILT: Wed Oct 10 12:43:42 CDT 2018
lvm2-libs-2.02.180-10.el7_6.1    BUILT: Wed Oct 10 12:43:42 CDT 2018
lvm2-cluster-2.02.180-10.el7_6.1    BUILT: Wed Oct 10 12:43:42 CDT 2018
lvm2-lockd-2.02.180-10.el7_6.1    BUILT: Wed Oct 10 12:43:42 CDT 2018
lvm2-python-boom-0.9-11.el7    BUILT: Mon Sep 10 04:49:22 CDT 2018
cmirror-2.02.180-10.el7_6.1    BUILT: Wed Oct 10 12:43:42 CDT 2018
device-mapper-1.02.149-10.el7_6.1    BUILT: Wed Oct 10 12:43:42 CDT 2018
device-mapper-libs-1.02.149-10.el7_6.1    BUILT: Wed Oct 10 12:43:42 CDT 2018
device-mapper-event-1.02.149-10.el7_6.1    BUILT: Wed Oct 10 12:43:42 CDT 2018
device-mapper-event-libs-1.02.149-10.el7_6.1    BUILT: Wed Oct 10 12:43:42 CDT 2018
device-mapper-persistent-data-0.7.3-3.el7    BUILT: Tue Nov 14 05:07:18 CST 2017


How reproducible:
Every time
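
For reference, a condensed shell sketch of the reproducer above, assuming a clustered VG named split_image already exists on the multipath PVs listed earlier:

lvcreate --activate y --type mirror -m 4 -n split_images_sequentially -L 300M split_image
# wait until Cpy%Sync reaches 100%, then split off one leg
lvconvert --yes --splitmirrors 1 --name new0 split_image/split_images_sequentially
lvs -a -o lv_name,lv_attr split_image   # new0 reports itself active ("-wi-a-----")
dmsetup status | grep split             # ...but no dm device was created for it
ls /dev/split_image/new0                # ...and no /dev node exists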

Comment 2 Corey Marthaler 2019-06-28 19:31:42 UTC
I just saw this fail again, yet it appears this worked in the previous lvm2 7.7 build; double-checking...


3.10.0-1057.el7.x86_64

lvm2-2.02.185-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-libs-2.02.185-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-cluster-2.02.185-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-lockd-2.02.185-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-python-boom-0.9-18.el7    BUILT: Fri Jun 21 04:18:58 CDT 2019
cmirror-2.02.185-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-1.02.158-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-libs-1.02.158-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-event-1.02.158-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-event-libs-1.02.158-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-persistent-data-0.8.5-1.el7    BUILT: Mon Jun 10 03:58:20 CDT 2019



harding-02: pvcreate /dev/mapper/mpatha1 /dev/mapper/mpathb1 /dev/mapper/mpathc1 /dev/mapper/mpathd1 /dev/mapper/mpathe1 /dev/mapper/mpathf1
harding-02: vgcreate   split_image /dev/mapper/mpatha1 /dev/mapper/mpathb1 /dev/mapper/mpathc1 /dev/mapper/mpathd1 /dev/mapper/mpathe1 /dev/mapper/mpathf1

============================================================
Iteration 1 of 1 started at Fri Jun 28 14:21:20 CDT 2019
============================================================
SCENARIO - [sequentially_split_off_all_pvs]
Create a mirror with many legs and then sequentially split off each one of the PVs
harding-03: lvcreate --activate y --type mirror -m 4 -n split_pvs_sequentially -L 300M split_image
Waiting until all mirror|raid volumes become fully syncd...
   1/1 mirror(s) are fully synced: ( 100.00% )
Sleeping 15 sec
Sleeping 15 sec

splitting off legs:
         /dev/mapper/mpathe1
couldn't find /dev/split_image/new0

[root@harding-03 ~]# lvs -a -o +devices
  LV                                VG              Attr       LSize    Pool Origin Data%  Meta%  Move Log                           Cpy%Sync Convert Devices                                                                                                                                    
  new0                              split_image     -wi-a-----  300.00m                                                                               /dev/mapper/mpathe1(0)                                                                                                                     
  split_pvs_sequentially            split_image     mwi-a-m---  300.00m                                [split_pvs_sequentially_mlog] 100.00           split_pvs_sequentially_mimage_0(0),split_pvs_sequentially_mimage_1(0),split_pvs_sequentially_mimage_2(0),split_pvs_sequentially_mimage_3(0)
  [split_pvs_sequentially_mimage_0] split_image     iwi-aom---  300.00m                                                                               /dev/mapper/mpatha1(0)                                                                                                                     
  [split_pvs_sequentially_mimage_1] split_image     iwi-aom---  300.00m                                                                               /dev/mapper/mpathb1(0)                                                                                                                     
  [split_pvs_sequentially_mimage_2] split_image     iwi-aom---  300.00m                                                                               /dev/mapper/mpathc1(0)                                                                                                                     
  [split_pvs_sequentially_mimage_3] split_image     iwi-aom---  300.00m                                                                               /dev/mapper/mpathd1(0)                                                                                                                     
  [split_pvs_sequentially_mlog]     split_image     lwi-aom---    4.00m                                                                               /dev/mapper/mpathf1(0)                                                                                                                     
[root@harding-03 ~]# ls -l /dev/split_image/new0
ls: cannot access /dev/split_image/new0: No such file or directory

Comment 3 Corey Marthaler 2019-07-08 19:25:24 UTC
I double-checked and this did indeed work in lvm2-2.02.185-1.el7, so something caused this regression in lvm2-2.02.185-2.el7.

This feels reminiscent of chasing down bug 1595875 in rhel7.6.

Comment 8 Jonathan Earl Brassow 2019-09-19 19:23:48 UTC
seriously, not a blocker.

Comment 11 Zdenek Kabelac 2019-10-31 17:33:31 UTC
Fixed by:

https://www.redhat.com/archives/lvm-devel/2019-October/msg00164.html
https://www.redhat.com/archives/lvm-devel/2019-October/msg00165.html

That said, there are likely many more non-ideal steps during mirror/raid processing that should be updated to follow the basic
write/suspend/commit/resume rule and to avoid working with metadata hints stored in the running command,
as these can lead to unresolvable states if the command is killed between or during the commit and resume steps.

Comment 12 Steven J. Levine 2019-11-01 18:04:39 UTC
brassow:

Now that this is in post, should we rewrite the release note/doc text as a "bug fix" for GA?

Comment 15 Corey Marthaler 2019-11-08 18:05:42 UTC
This appears fixed in the latest rpms.

3.10.0-1109.el7.x86_64

lvm2-2.02.186-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
lvm2-libs-2.02.186-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
lvm2-cluster-2.02.186-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
lvm2-lockd-2.02.186-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
lvm2-python-boom-0.9-20.el7    BUILT: Tue Sep 24 06:18:20 CDT 2019
cmirror-2.02.186-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
device-mapper-1.02.164-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
device-mapper-libs-1.02.164-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
device-mapper-event-1.02.164-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
device-mapper-event-libs-1.02.164-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
device-mapper-persistent-data-0.8.5-1.el7    BUILT: Mon Jun 10 03:58:20 CDT 2019

Cluster name: HARDING
Stack: corosync
Current DC: harding-03 (version 1.1.21-2.el7-f14e36fd43) - partition with quorum
Last updated: Fri Nov  8 12:03:24 2019
Last change: Fri Nov  8 10:50:00 2019 by root via cibadmin on harding-02

2 nodes configured
5 resources configured

Online: [ harding-02 harding-03 ]

Full list of resources:

 smoke-apc      (stonith:fence_apc):    Started harding-02
 Clone Set: dlm-clone [dlm]
     Started: [ harding-02 harding-03 ]
 Clone Set: clvmd-clone [clvmd]
     Started: [ harding-02 harding-03 ]

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled


============================================================
Iteration 12 of 12 started at Fri Nov  8 11:52:36 CST 2019
============================================================
SCENARIO - [sequentially_split_off_all_images]
Create a mirror with many legs and then sequentially split off each one of the images
harding-03: lvcreate --activate y --type mirror -m 4 -n split_images_sequentially -L 300M split_image
Waiting until all mirror|raid volumes become fully syncd...
   1/1 mirror(s) are fully synced: ( 100.00% )
Sleeping 15 sec
Sleeping 15 sec

splitting off legs...

Deactivating LV split_image/new0 on harding-02... and removing
Deactivating LV split_image/new1 on harding-02... and removing
Deactivating LV split_image/new2 on harding-02... and removing
Deactivating LV split_image/new3 on harding-02... and removing
Deactivating LV split_image/split_images_sequentially on harding-02... and removing
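
As a hedged spot check (names as in the scenario above, not an exact transcript of the test run), on the fixed lvm2-2.02.186-3.el7 build the split-off LV should now have a real device-mapper device and /dev node immediately after the split:

lvconvert --yes --splitmirrors 1 --name new0 split_image/split_images_sequentially
lvs -o lv_name,lv_attr split_image/new0   # attr shows the LV as active
dmsetup info split_image-new0             # dm device now exists
ls -l /dev/split_image/new0               # /dev node now exists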

Comment 16 Jonathan Earl Brassow 2020-01-02 18:02:12 UTC
(In reply to Steven J. Levine from comment #12)
> brassow:
> 
> Now that this is in post, should we rewrite the release note/doc text as a
> "bug fix" for GA?

yes please.

Comment 23 errata-xmlrpc 2020-03-31 20:04:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1129

