Bug 859562 - DM RAID: 'sync' table argument is ineffective.
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: kernel
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Jonathan Earl Brassow
Depends On:
Blocks: 961662
Reported: 2012-09-21 17:09 EDT by Jonathan Earl Brassow
Modified: 2015-04-21 18:29 EDT
CC List: 4 users

See Also:
Fixed In Version: kernel-2.6.32-376.el6
Doc Type: Bug Fix
Doc Text:
A bug in the device-mapper RAID kernel module prevented the "sync" directive from being honored. As a result, users were unable to force their RAID arrays to undergo a complete resynchronization. This has been fixed; users can now run 'lvchange --resync my_vg/my_raid_lv' to force a complete resynchronization of their LVM RAID arrays.
Story Points: ---
Clone Of:
Last Closed: 2013-11-21 08:36:35 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Jonathan Earl Brassow 2012-09-21 17:09:55 EDT
There are two table arguments that can be given to a DM RAID target that
control whether the array is forced to (re)synchronize or skip initialization:
"sync" and "nosync".  When "sync" is given, we set mddev->recovery_cp to 0
in order to cause the device to resynchronize.  This is insufficient if there
is a bitmap in use, because the array will simply look at the bitmap and see
that there is no recovery necessary.

This means that a user cannot use the 'sync' directive to cause the array to resync itself.

This is a low-priority bug because a user can simply clear the bitmap area by writing zeros to it before assembling the array, which is what LVM does.
Comment 2 RHEL Product and Program Management 2013-05-06 13:20:32 EDT
This request was evaluated by Red Hat Product Management for
inclusion in a Red Hat Enterprise Linux release.  Product
Management has requested further review of this request by
Red Hat Engineering, for potential inclusion in a Red Hat
Enterprise Linux release for currently deployed products.
This request is not yet committed for inclusion in a release.
Comment 3 Jonathan Earl Brassow 2013-05-06 13:40:13 EDT
Two ways to test:

1) Create a RAID LV
~> lvcreate --type raid1 -m 1 -L 1G -n lv vg
2) Wait for it to sync ('lvs' output should read 100% for Cpy%Sync)
3) Issue a resync
~> lvchange --resync vg/lv
4) Ensure the array is properly resyncing ('lvs' will build to 100%)

For more thorough testing, replace step 3 above with device-mapper commands:
# Load a new table with the additional 'sync' argument
[~]# dmsetup table vg-lv | sed s:'3 128':'4 128 sync': | dmsetup load vg-lv

# Suspend and resume to replace the old table with the new
[~]# dmsetup suspend vg-lv
[~]# dmsetup resume vg-lv

# Ensure LV is resyncing
[~]# lvs vg
  LV   VG   Attr      LSize Pool Origin Data%  Move Log Cpy%Sync Convert
  lv   vg   rwi-a-r-- 5.00g                                 0.00        
[~]# lvs vg
  LV   VG   Attr      LSize Pool Origin Data%  Move Log Cpy%Sync Convert
  lv   vg   rwi-a-r-- 5.00g                                18.74        
[~]# lvs vg
  LV   VG   Attr      LSize Pool Origin Data%  Move Log Cpy%Sync Convert
  lv   vg   rwi-a-r-- 5.00g                                37.47        
[~]# lvs vg
  LV   VG   Attr      LSize Pool Origin Data%  Move Log Cpy%Sync Convert
  lv   vg   rwi-a-r-- 5.00g                               100.00
Comment 4 Jarod Wilson 2013-05-09 16:35:54 EDT
Comment 12 errata-xmlrpc 2013-11-21 08:36:35 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

Comment 13 Jonathan Earl Brassow 2015-04-21 18:29:01 EDT
clearing my needinfo flag for this closed bug
