Bug 837098 - resync doesn't appear to actually resync a raid mirror
Summary: resync doesn't appear to actually resync a raid mirror
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.3
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Jonathan Earl Brassow
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-07-02 18:50 UTC by Corey Marthaler
Modified: 2013-02-21 08:11 UTC
CC List: 10 users

Fixed In Version: lvm2-2.02.98-1.el6
Doc Type: Bug Fix
Doc Text:
Previously, a user-initiated resync of a RAID logical volume ("lvchange --resync") had no effect: the logical volume did not actually resynchronize. This has been corrected, and the logical volume now performs the resync as requested.
Clone Of:
Environment:
Last Closed: 2013-02-21 08:11:13 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System: Red Hat Product Errata
ID: RHBA-2013:0501
Private: no
Priority: normal
Status: SHIPPED_LIVE
Summary: lvm2 bug fix and enhancement update
Last Updated: 2013-02-20 21:30:45 UTC

Description Corey Marthaler 2012-07-02 18:50:00 UTC
Description of problem:
A resync doesn't appear to actually resync the mirror

[root@hayes-01 bin]# lvchange -an raid_sanity/resync_nosync
[root@hayes-01 bin]# lvs -a -o +devices
  LV                        Attr     LSize  Copy%  Devices
  resync_nosync             Rwi---m-  3.00g        resync_nosync_rimage_0(0),resync_nosync_rimage_1(0)
  [resync_nosync_rimage_0]  Iwi---r-  3.00g        /dev/etherd/e1.1p9(1)
  [resync_nosync_rimage_1]  Iwi---r-  3.00g        /dev/etherd/e1.1p8(1)
  [resync_nosync_rmeta_0]   ewi---r-  4.00m        /dev/etherd/e1.1p9(0)
  [resync_nosync_rmeta_1]   ewi---r-  4.00m        /dev/etherd/e1.1p8(0)

[root@hayes-01 bin]# lvchange --resync -y raid_sanity/resync_nosync
[root@hayes-01 bin]# lvchange -ay raid_sanity/resync_nosync


# After the resync and reactivation, the raid mirror is instantly 100% synced, which is very odd. If this were an lvm mirror, it would take at least 30-60 seconds to get to 100%.
[root@hayes-01 bin]# dmsetup status
raid_sanity-resync_nosync_rmeta_0: 0 8192 linear 
raid_sanity-resync_nosync: 0 6291456 raid raid1 2 AA 6291456/6291456
raid_sanity-resync_nosync_rimage_1: 0 6291456 linear 
raid_sanity-resync_nosync_rimage_0: 0 6291456 linear 
raid_sanity-resync_nosync_rmeta_1: 0 8192 linear 

[root@hayes-01 bin]# lvs -a -o +devices
  LV                        Attr     LSize  Copy%  Devices
  resync_nosync             rwi-a-m-  3.00g 100.00 resync_nosync_rimage_0(0),resync_nosync_rimage_1(0)
  [resync_nosync_rimage_0]  iwi-aor-  3.00g        /dev/etherd/e1.1p9(1)
  [resync_nosync_rimage_1]  iwi-aor-  3.00g        /dev/etherd/e1.1p8(1)
  [resync_nosync_rmeta_0]   ewi-aor-  4.00m        /dev/etherd/e1.1p9(0)
  [resync_nosync_rmeta_1]   ewi-aor-  4.00m        /dev/etherd/e1.1p8(0)
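
A quick way to confirm whether a resync is actually running is to poll the Copy% field. A minimal sketch, assuming the VG/LV names from the transcript above (copy_percent is the lvs report field behind the Copy% column):

  # Poll sync progress; a genuine resync should start below 100.00
  # and climb over 30-60 seconds, much like an lvm mirror would.
  while sleep 5; do
      lvs --noheadings -o copy_percent raid_sanity/resync_nosync
  done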

Version-Release number of selected component (if applicable):
2.6.32-278.el6.x86_64
lvm2-2.02.95-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
lvm2-libs-2.02.95-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
lvm2-cluster-2.02.95-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
udev-147-2.41.el6    BUILT: Thu Mar  1 13:01:08 CST 2012
device-mapper-1.02.74-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
device-mapper-libs-1.02.74-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
device-mapper-event-1.02.74-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
device-mapper-event-libs-1.02.74-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
cmirror-2.02.95-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012

Comment 1 RHEL Program Management 2012-07-10 06:01:25 UTC
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.

Comment 2 RHEL Program Management 2012-07-10 23:59:30 UTC
This request was erroneously removed from consideration in Red Hat Enterprise Linux 6.4, which is currently under development.  This request will be evaluated for inclusion in Red Hat Enterprise Linux 6.4.

Comment 3 Jonathan Earl Brassow 2012-09-11 18:13:51 UTC
3 (THREE) upstream commits compose the patch for this bug:

commit 4ededc698f32a4cbabaf70bfd3835abab866cbb9
Author: Jonathan Brassow <jbrassow>
Date:   Tue Sep 11 13:09:35 2012 -0500

    RAID:  Properly handle resync of RAID LVs
    
    Issuing a 'lvchange --resync <VG>/<RAID_LV>' had no effect.  This is
    because the code to handle RAID LVs was not present.  This patch adds
    the code that will clear the metadata areas of RAID LVs - causing them
    to resync upon activation.

commit a2d9b1a7e92aab78c80be0e0cbd45fde00d42aa4
Author: Jonathan Brassow <jbrassow>
Date:   Tue Sep 11 13:01:05 2012 -0500

    cleanup:  Restructure code that handles mirror resyncing
    
    When an LV is to be resynced, the metadata areas are cleared and the
    LV is reactivated.  This is true for mirroring and will also be true
    for RAID LVs.  We restructure the code in lvchange_resync() so that we
    keep all the common steps necessary (validation of ability to resync,
    deactivation, activation of meta/log devices, clearing of those devices,
    etc) and place the code that will be divergent in separate functions:
        detach_metadata_devices()
        attach_metadata_devices()
    
    The common steps will be processed on lists of metadata devices.  Before
    RAID capability is added, this will simply be the mirror log device (if
    found).
    
    This patch lays the ground-work for adding resync of RAID LVs.

commit 05131f5853e86419d9c726faa961b8d012298d9c
Author: Jonathan Brassow <jbrassow>
Date:   Tue Sep 11 12:55:17 2012 -0500

    cleanup:  Reduce indentation by short-circuiting function
    
    By changing the conditional for resyncing mirrors with core-logs a
    bit, we can short-circuit the rest of the function for that case
    and reduce the amount of indenting in the rest of the function.
    
    This cleanup will simplify future patches aimed at properly handling
    the resync of RAID LVs.
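
Per the first commit message above, the fix clears the metadata areas of the RAID LV so that it resyncs on the next activation. A minimal end-to-end exercise of that path, assuming a hypothetical VG "vg" and LV "rtest" (any RAID1 LV works):

  # Hypothetical names; substitute your own VG/LV.
  lvcreate --type raid1 -m 1 -L 1G -n rtest vg
  lvchange -an vg/rtest             # deactivate first
  lvchange --resync -y vg/rtest     # clears the rmeta areas (the fix)
  lvchange -ay vg/rtest             # reactivation kicks off the resync
  dmsetup status vg-rtest           # sync ratio should start near 0/<total>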

Comment 4 Jonathan Earl Brassow 2012-09-11 18:17:09 UTC
Unit tests:

[root@hayes-01 ~]# dmsetup status /dev/mapper/vg-raid? vg-raid10 vg-mirror
/dev/mapper/vg-raid1: 0 2097152 raid raid1 2 AA 2097152/2097152
/dev/mapper/vg-raid4: 0 2113536 raid raid4 4 AAAA 704512/704512
/dev/mapper/vg-raid5: 0 2113536 raid raid5_ls 4 AAAA 704512/704512
/dev/mapper/vg-raid6: 0 2113536 raid raid6_zr 5 AAAAA 704512/704512
vg-raid10: 0 2097152 raid raid10 4 AAAA 2097152/2097152
vg-mirror: 0 2097152 mirror 2 253:47 253:48 2048/2048 1 AA 3 disk 253:46 A
[root@hayes-01 ~]# for i in raid1 raid4 raid5 raid6 raid10 mirror; do echo "############## Resyncing vg/$i"; lvchange --resync -y vg/$i; dmsetup status /dev/mapper/vg-raid? vg-raid10 vg-mirror; done
############## Resyncing vg/raid1
/dev/mapper/vg-raid1: 0 2097152 raid raid1 2 aa 0/2097152
/dev/mapper/vg-raid4: 0 2113536 raid raid4 4 AAAA 704512/704512
/dev/mapper/vg-raid5: 0 2113536 raid raid5_ls 4 AAAA 704512/704512
/dev/mapper/vg-raid6: 0 2113536 raid raid6_zr 5 AAAAA 704512/704512
vg-raid10: 0 2097152 raid raid10 4 AAAA 2097152/2097152
vg-mirror: 0 2097152 mirror 2 253:47 253:48 2048/2048 1 AA 3 disk 253:46 A
############## Resyncing vg/raid4
/dev/mapper/vg-raid1: 0 2097152 raid raid1 2 AA 2097152/2097152
/dev/mapper/vg-raid4: 0 2113536 raid raid4 4 aaaa 0/704512
/dev/mapper/vg-raid5: 0 2113536 raid raid5_ls 4 AAAA 704512/704512
/dev/mapper/vg-raid6: 0 2113536 raid raid6_zr 5 AAAAA 704512/704512
vg-raid10: 0 2097152 raid raid10 4 AAAA 2097152/2097152
vg-mirror: 0 2097152 mirror 2 253:47 253:48 2048/2048 1 AA 3 disk 253:46 A
############## Resyncing vg/raid5
/dev/mapper/vg-raid1: 0 2097152 raid raid1 2 AA 2097152/2097152
/dev/mapper/vg-raid4: 0 2113536 raid raid4 4 AAAA 704512/704512
/dev/mapper/vg-raid5: 0 2113536 raid raid5_ls 4 aaaa 0/704512
/dev/mapper/vg-raid6: 0 2113536 raid raid6_zr 5 AAAAA 704512/704512
vg-raid10: 0 2097152 raid raid10 4 AAAA 2097152/2097152
vg-mirror: 0 2097152 mirror 2 253:47 253:48 2048/2048 1 AA 3 disk 253:46 A
############## Resyncing vg/raid6
/dev/mapper/vg-raid1: 0 2097152 raid raid1 2 AA 2097152/2097152
/dev/mapper/vg-raid4: 0 2113536 raid raid4 4 AAAA 704512/704512
/dev/mapper/vg-raid5: 0 2113536 raid raid5_ls 4 AAAA 704512/704512
/dev/mapper/vg-raid6: 0 2113536 raid raid6_zr 5 aaaaa 0/704512
vg-raid10: 0 2097152 raid raid10 4 AAAA 2097152/2097152
vg-mirror: 0 2097152 mirror 2 253:47 253:48 2048/2048 1 AA 3 disk 253:46 A
############## Resyncing vg/raid10
/dev/mapper/vg-raid1: 0 2097152 raid raid1 2 AA 2097152/2097152
/dev/mapper/vg-raid4: 0 2113536 raid raid4 4 AAAA 704512/704512
/dev/mapper/vg-raid5: 0 2113536 raid raid5_ls 4 AAAA 704512/704512
/dev/mapper/vg-raid6: 0 2113536 raid raid6_zr 5 AAAAA 704512/704512
vg-raid10: 0 2097152 raid raid10 4 aaaa 0/2097152
vg-mirror: 0 2097152 mirror 2 253:47 253:48 2048/2048 1 AA 3 disk 253:46 A
############## Resyncing vg/mirror
/dev/mapper/vg-raid1: 0 2097152 raid raid1 2 AA 2097152/2097152
/dev/mapper/vg-raid4: 0 2113536 raid raid4 4 AAAA 704512/704512
/dev/mapper/vg-raid5: 0 2113536 raid raid5_ls 4 AAAA 704512/704512
/dev/mapper/vg-raid6: 0 2113536 raid raid6_zr 5 AAAAA 704512/704512
vg-raid10: 0 2097152 raid raid10 4 AAAA 2097152/2097152
vg-mirror: 0 2097152 mirror 2 253:47 253:48 34/2048 1 AA 3 disk 253:46 A
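
In these status lines, the fields after the raid type are the device count, a per-device health string (uppercase "A" = in sync, lowercase "a" = alive but resyncing), and the <in-sync>/<total> sector ratio, which is why a freshly cleared LV reads all-lowercase with 0/<total>. A small sketch to pull out just those fields (the awk field positions assume the exact status format shown above):

  # Print each raid LV's name, health string, and sync ratio.
  dmsetup status --target raid | awk '{ print $1, $7, $8 }'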

Comment 6 Corey Marthaler 2013-01-07 20:27:02 UTC
Fix verified in the latest rpms.


2.6.32-348.el6.x86_64
lvm2-2.02.98-6.el6    BUILT: Thu Dec 20 07:00:04 CST 2012
lvm2-libs-2.02.98-6.el6    BUILT: Thu Dec 20 07:00:04 CST 2012
lvm2-cluster-2.02.98-6.el6    BUILT: Thu Dec 20 07:00:04 CST 2012
udev-147-2.43.el6    BUILT: Thu Oct 11 05:59:38 CDT 2012
device-mapper-1.02.77-6.el6    BUILT: Thu Dec 20 07:00:04 CST 2012
device-mapper-libs-1.02.77-6.el6    BUILT: Thu Dec 20 07:00:04 CST 2012
device-mapper-event-1.02.77-6.el6    BUILT: Thu Dec 20 07:00:04 CST 2012
device-mapper-event-libs-1.02.77-6.el6    BUILT: Thu Dec 20 07:00:04 CST 2012
cmirror-2.02.98-6.el6    BUILT: Thu Dec 20 07:00:04 CST 2012



SCENARIO (raid1) - [nosync_raid_resynchronization]
Create a nosync raid and resync it
taft-01: lvcreate --type raid1 -m 1 -n resync_nosync -L 3G --nosync raid_sanity
  WARNING: New raid1 won't be synchronised. Don't read what you didn't write!
Verifying percent is finished at 100%

Deactivating resync_nosync raid
Resyncing resync_nosync raid
lvchange --resync -y raid_sanity/resync_nosync
lvchange -ay raid_sanity/resync_nosync
Activating resync_nosync raid

Verifying nosync percent 8% is now less than 100%

Deactivating raid resync_nosync... and removing
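
A scripted version of this check might look as follows; this is a sketch, and the below-100% pass criterion is inferred from the scenario output above:

  # Resync, then verify Copy% actually drops below 100 on a fixed build.
  lvchange -an raid_sanity/resync_nosync
  lvchange --resync -y raid_sanity/resync_nosync
  lvchange -ay raid_sanity/resync_nosync
  lvs --noheadings -o copy_percent raid_sanity/resync_nosync |
      awk '{ exit ($1 < 100.0) ? 0 : 1 }' &&
      echo "PASS: resync in progress" || echo "FAIL: still 100% in sync"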

Comment 7 errata-xmlrpc 2013-02-21 08:11:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0501.html

