Bug 837098
Summary: | resync doesn't appear to actually resync a raid mirror | ||
---|---|---|---|
Product: | Red Hat Enterprise Linux 6 | Reporter: | Corey Marthaler <cmarthal> |
Component: | lvm2 | Assignee: | Jonathan Earl Brassow <jbrassow> |
Status: | CLOSED ERRATA | QA Contact: | Cluster QE <mspqa-list> |
Severity: | low | Docs Contact: | |
Priority: | low | ||
Version: | 6.3 | CC: | agk, benscott, dwysocha, heinzm, jbrassow, msnitzer, prajnoha, prockai, thornber, zkabelac |
Target Milestone: | rc | ||
Target Release: | --- | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | lvm2-2.02.98-1.el6 | Doc Type: | Bug Fix |
Doc Text: | Previously, a user-initiated resync of a RAID logical volume had no effect; the RAID logical volume did not actually resync. This has been corrected, and the logical volume now performs the resync as requested. | ||
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2013-02-21 08:11:13 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Corey Marthaler
2012-07-02 18:50:00 UTC
This request was not resolved in time for the current release. Red Hat invites you to ask your support representative to propose this request, if still desired, for consideration in the next release of Red Hat Enterprise Linux.

This request was erroneously removed from consideration in Red Hat Enterprise Linux 6.4, which is currently under development. This request will be evaluated for inclusion in Red Hat Enterprise Linux 6.4.

Three upstream commits compose the patch for this bug:

```
commit 4ededc698f32a4cbabaf70bfd3835abab866cbb9
Author: Jonathan Brassow <jbrassow>
Date:   Tue Sep 11 13:09:35 2012 -0500

    RAID: Properly handle resync of RAID LVs

    Issuing a 'lvchange --resync <VG>/<RAID_LV>' had no effect. This is
    because the code to handle RAID LVs was not present. This patch adds
    the code that will clear the metadata areas of RAID LVs - causing
    them to resync upon activation.
```

```
commit a2d9b1a7e92aab78c80be0e0cbd45fde00d42aa4
Author: Jonathan Brassow <jbrassow>
Date:   Tue Sep 11 13:01:05 2012 -0500

    cleanup: Restructure code that handles mirror resyncing

    When an LV is to be resynced, the metadata areas are cleared and the
    LV is reactivated. This is true for mirroring and will also be true
    for RAID LVs. We restructure the code in lvchange_resync() so that
    we keep all the common steps necessary (validation of ability to
    resync, deactivation, activation of meta/log devices, clearing of
    those devices, etc.) and place the code that will be divergent in
    separate functions:
        detach_metadata_devices()
        attach_metadata_devices()
    The common steps will be processed on lists of metadata devices.
    Before RAID capability is added, this will simply be the mirror log
    device (if found). This patch lays the ground-work for adding resync
    of RAID LVs.
```
```
commit 05131f5853e86419d9c726faa961b8d012298d9c
Author: Jonathan Brassow <jbrassow>
Date:   Tue Sep 11 12:55:17 2012 -0500

    cleanup: Reduce indentation by short-circuiting function

    By changing the conditional for resyncing mirrors with core-logs a
    bit, we can short-circuit the rest of the function for that case and
    reduce the amount of indenting in the rest of the function. This
    cleanup will simplify future patches aimed at properly handling the
    resync of RAID LVs.
```

Unit tests:

```
[root@hayes-01 ~]# dmsetup status /dev/mapper/vg-raid? vg-raid10 vg-mirror
/dev/mapper/vg-raid1: 0 2097152 raid raid1 2 AA 2097152/2097152
/dev/mapper/vg-raid4: 0 2113536 raid raid4 4 AAAA 704512/704512
/dev/mapper/vg-raid5: 0 2113536 raid raid5_ls 4 AAAA 704512/704512
/dev/mapper/vg-raid6: 0 2113536 raid raid6_zr 5 AAAAA 704512/704512
vg-raid10: 0 2097152 raid raid10 4 AAAA 2097152/2097152
vg-mirror: 0 2097152 mirror 2 253:47 253:48 2048/2048 1 AA 3 disk 253:46 A

[root@hayes-01 ~]# for i in raid1 raid4 raid5 raid6 raid10 mirror; do
>     echo "############## Resyncing vg/$i"
>     lvchange --resync -y vg/$i
>     dmsetup status /dev/mapper/vg-raid? vg-raid10 vg-mirror
> done
############## Resyncing vg/raid1
/dev/mapper/vg-raid1: 0 2097152 raid raid1 2 aa 0/2097152
/dev/mapper/vg-raid4: 0 2113536 raid raid4 4 AAAA 704512/704512
/dev/mapper/vg-raid5: 0 2113536 raid raid5_ls 4 AAAA 704512/704512
/dev/mapper/vg-raid6: 0 2113536 raid raid6_zr 5 AAAAA 704512/704512
vg-raid10: 0 2097152 raid raid10 4 AAAA 2097152/2097152
vg-mirror: 0 2097152 mirror 2 253:47 253:48 2048/2048 1 AA 3 disk 253:46 A
############## Resyncing vg/raid4
/dev/mapper/vg-raid1: 0 2097152 raid raid1 2 AA 2097152/2097152
/dev/mapper/vg-raid4: 0 2113536 raid raid4 4 aaaa 0/704512
/dev/mapper/vg-raid5: 0 2113536 raid raid5_ls 4 AAAA 704512/704512
/dev/mapper/vg-raid6: 0 2113536 raid raid6_zr 5 AAAAA 704512/704512
vg-raid10: 0 2097152 raid raid10 4 AAAA 2097152/2097152
vg-mirror: 0 2097152 mirror 2 253:47 253:48 2048/2048 1 AA 3 disk 253:46 A
############## Resyncing vg/raid5
/dev/mapper/vg-raid1: 0 2097152 raid raid1 2 AA 2097152/2097152
/dev/mapper/vg-raid4: 0 2113536 raid raid4 4 AAAA 704512/704512
/dev/mapper/vg-raid5: 0 2113536 raid raid5_ls 4 aaaa 0/704512
/dev/mapper/vg-raid6: 0 2113536 raid raid6_zr 5 AAAAA 704512/704512
vg-raid10: 0 2097152 raid raid10 4 AAAA 2097152/2097152
vg-mirror: 0 2097152 mirror 2 253:47 253:48 2048/2048 1 AA 3 disk 253:46 A
############## Resyncing vg/raid6
/dev/mapper/vg-raid1: 0 2097152 raid raid1 2 AA 2097152/2097152
/dev/mapper/vg-raid4: 0 2113536 raid raid4 4 AAAA 704512/704512
/dev/mapper/vg-raid5: 0 2113536 raid raid5_ls 4 AAAA 704512/704512
/dev/mapper/vg-raid6: 0 2113536 raid raid6_zr 5 aaaaa 0/704512
vg-raid10: 0 2097152 raid raid10 4 AAAA 2097152/2097152
vg-mirror: 0 2097152 mirror 2 253:47 253:48 2048/2048 1 AA 3 disk 253:46 A
############## Resyncing vg/raid10
/dev/mapper/vg-raid1: 0 2097152 raid raid1 2 AA 2097152/2097152
/dev/mapper/vg-raid4: 0 2113536 raid raid4 4 AAAA 704512/704512
/dev/mapper/vg-raid5: 0 2113536 raid raid5_ls 4 AAAA 704512/704512
/dev/mapper/vg-raid6: 0 2113536 raid raid6_zr 5 AAAAA 704512/704512
vg-raid10: 0 2097152 raid raid10 4 aaaa 0/2097152
vg-mirror: 0 2097152 mirror 2 253:47 253:48 2048/2048 1 AA 3 disk 253:46 A
############## Resyncing vg/mirror
/dev/mapper/vg-raid1: 0 2097152 raid raid1 2 AA 2097152/2097152
/dev/mapper/vg-raid4: 0 2113536 raid raid4 4 AAAA 704512/704512
/dev/mapper/vg-raid5: 0 2113536 raid raid5_ls 4 AAAA 704512/704512
/dev/mapper/vg-raid6: 0 2113536 raid raid6_zr 5 AAAAA 704512/704512
vg-raid10: 0 2097152 raid raid10 4 AAAA 2097152/2097152
vg-mirror: 0 2097152 mirror 2 253:47 253:48 34/2048 1 AA 3 disk 253:46 A
```

Fix verified in the latest rpms.

```
2.6.32-348.el6.x86_64
lvm2-2.02.98-6.el6                      BUILT: Thu Dec 20 07:00:04 CST 2012
lvm2-libs-2.02.98-6.el6                 BUILT: Thu Dec 20 07:00:04 CST 2012
lvm2-cluster-2.02.98-6.el6              BUILT: Thu Dec 20 07:00:04 CST 2012
udev-147-2.43.el6                       BUILT: Thu Oct 11 05:59:38 CDT 2012
device-mapper-1.02.77-6.el6             BUILT: Thu Dec 20 07:00:04 CST 2012
device-mapper-libs-1.02.77-6.el6        BUILT: Thu Dec 20 07:00:04 CST 2012
device-mapper-event-1.02.77-6.el6       BUILT: Thu Dec 20 07:00:04 CST 2012
device-mapper-event-libs-1.02.77-6.el6  BUILT: Thu Dec 20 07:00:04 CST 2012
cmirror-2.02.98-6.el6                   BUILT: Thu Dec 20 07:00:04 CST 2012
```

SCENARIO (raid1) - [nosync_raid_resynchronization]
Create a nosync raid and resync it

```
taft-01: lvcreate --type raid1 -m 1 -n resync_nosync -L 3G --nosync raid_sanity
  WARNING: New raid1 won't be synchronised. Don't read what you didn't write!
Verifying percent is finished at 100%
Deactivating resync_nosync raid
Resyncing resync_nosync raid
lvchange --resync -y raid_sanity/resync_nosync
lvchange -ay raid_sanity/resync_nosync
Activating resync_nosync raid
Verifying nosync percent 8% is now less than 100%
Deactivating raid resync_nosync... and removing
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0501.html
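The `dmsetup status` transcripts above report resync progress as an `<in-sync>/<total>` sector ratio (e.g. `0/2097152` immediately after `lvchange --resync`, climbing back to `2097152/2097152` when the resync completes). A minimal sketch of turning one such status line into a percentage, useful when scripting a check like the QA scenario above; the helper name `sync_percent` is hypothetical, and the field layout is assumed to match the raid/mirror target output shown in this report:

```shell
# Hypothetical helper: extract the resync percentage from one line of
# `dmsetup status` output for a raid or mirror target. It picks out the
# first "<in-sync>/<total>" field, as seen in the transcripts above.
sync_percent() {
    ratio=$(printf '%s\n' "$1" |
        awk '{ for (i = 1; i <= NF; i++)
                   if ($i ~ /^[0-9]+\/[0-9]+$/) { print $i; exit } }')
    cur=${ratio%/*}
    total=${ratio#*/}
    echo $(( cur * 100 / total ))
}

# Freshly resynced raid1 from the unit test: 0% in sync.
sync_percent '/dev/mapper/vg-raid1: 0 2097152 raid raid1 2 aa 0/2097152'
# Fully synced raid4 from the unit test: 100% in sync.
sync_percent '/dev/mapper/vg-raid4: 0 2113536 raid raid4 4 AAAA 704512/704512'
```

Note that fields such as `253:47` in the mirror target's output contain a colon and so are skipped by the ratio pattern; only the `<in-sync>/<total>` field matches.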