Bug 859562 - DM RAID: 'sync' table argument is ineffective.
Summary: DM RAID: 'sync' table argument is ineffective.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: kernel
Version: 6.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Jonathan Earl Brassow
QA Contact: XiaoNi
URL:
Whiteboard:
Depends On:
Blocks: 961662
 
Reported: 2012-09-21 21:09 UTC by Jonathan Earl Brassow
Modified: 2015-04-21 22:29 UTC
CC List: 4 users

Fixed In Version: kernel-2.6.32-376.el6
Doc Type: Bug Fix
Doc Text:
A bug in the device-mapper RAID kernel module prevented the "sync" directive from being honored, so users were unable to force their RAID arrays to undergo a complete resynchronization. This has been fixed, and users can now run 'lvchange --resync my_vg/my_raid_lv' to force a complete resynchronization of an LVM RAID array.
Clone Of:
Environment:
Last Closed: 2013-11-21 13:36:35 UTC
Target Upstream Version:
Embargoed:




Links
System: Red Hat Product Errata
ID: RHSA-2013:1645
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Important: Red Hat Enterprise Linux 6 kernel update
Last Updated: 2013-11-20 22:04:18 UTC

Description Jonathan Earl Brassow 2012-09-21 21:09:55 UTC
There are two table arguments that can be given to a DM RAID target that
control whether the array is forced to (re)synchronize or skip initialization:
"sync" and "nosync".  When "sync" is given, we set mddev->recovery_cp to 0
in order to cause the device to resynchronize.  This is insufficient if there
is a bitmap in use, because the array will simply look at the bitmap and see
that there is no recovery necessary.

This means that a user cannot use the 'sync' directive to cause the array to resync itself.

This is a low priority bug because a user can simply clear the bitmap area by writing zeros to it before assembling the array - which is what LVM does.
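
For illustration, the following is a minimal, stand-alone sketch of the behavior described above. All identifiers are stand-ins invented for this example (the real logic lives in the kernel's dm-raid target); only the recovery_cp / MaxSector semantics come from this description, so treat it as an approximation, not the actual kernel code.

/* Stand-alone approximation of the "sync"/"nosync" handling described above.
 * All names here are illustrative stand-ins, not real kernel code.
 * Build with: cc -o sync_demo sync_demo.c */
#include <stdint.h>
#include <stdio.h>
#include <strings.h>

typedef uint64_t sector_t;
#define MaxSector (~(sector_t)0)      /* "array is fully in sync" */

struct fake_mddev {
    sector_t recovery_cp;             /* resync checkpoint */
    int      bitmap_clean;            /* write-intent bitmap reports no dirty regions */
};

static void apply_sync_arg(struct fake_mddev *md, const char *arg)
{
    if (!strcasecmp(arg, "nosync"))
        md->recovery_cp = MaxSector;  /* skip initialization entirely */
    else if (!strcasecmp(arg, "sync"))
        md->recovery_cp = 0;          /* request a full resync from sector 0 */
}

int main(void)
{
    struct fake_mddev md = { .recovery_cp = MaxSector, .bitmap_clean = 1 };

    apply_sync_arg(&md, "sync");

    /* The bug: with a bitmap in use, MD consults the (clean) bitmap rather
     * than recovery_cp, so setting recovery_cp = 0 alone never triggers the
     * requested resync. */
    if (md.recovery_cp == 0 && md.bitmap_clean)
        printf("resync requested, but a clean bitmap would suppress it\n");

    return 0;
}

Per the Doc Text, the actual fix (kernel-2.6.32-376.el6) makes the "sync" directive honored again; comment 3 below shows how to exercise it from user space with 'lvchange --resync' or a hand-loaded dm table.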

Comment 2 RHEL Program Management 2013-05-06 17:20:32 UTC
This request was evaluated by Red Hat Product Management for
inclusion in a Red Hat Enterprise Linux release.  Product
Management has requested further review of this request by
Red Hat Engineering, for potential inclusion in a Red Hat
Enterprise Linux release for currently deployed products.
This request is not yet committed for inclusion in a release.

Comment 3 Jonathan Earl Brassow 2013-05-06 17:40:13 UTC
Two ways to test:

1) Create a RAID LV
~> lvcreate --type raid1 -m 1 -L 1G -n lv vg
2) Wait for it to sync ('lvs' output should read 100% for Cpy%Sync)
3) Issue a resync
~> lvchange --resync vg/lv
4) Ensure the array is properly resyncing ('lvs' will show Cpy%Sync climbing back to 100%)

For a more thorough testing, use device-mapper to replace step 3 above:
# Load a new table with the additional 'sync' argument
[~]# dmsetup table vg-lv | sed s:'3 128':'4 128 sync': | dmsetup load vg-lv

# Suspend and resume to replace the old table with the new
[~]# dmsetup suspend vg-lv
[~]# dmsetup resume vg-lv

# Ensure LV is resyncing
[~]# lvs vg
  LV   VG   Attr      LSize Pool Origin Data%  Move Log Cpy%Sync Convert
  lv   vg   rwi-a-r-- 5.00g                                 0.00        
[~]# lvs vg
  LV   VG   Attr      LSize Pool Origin Data%  Move Log Cpy%Sync Convert
  lv   vg   rwi-a-r-- 5.00g                                18.74        
[~]# lvs vg
  LV   VG   Attr      LSize Pool Origin Data%  Move Log Cpy%Sync Convert
  lv   vg   rwi-a-r-- 5.00g                                37.47        
[~]# lvs vg
  LV   VG   Attr      LSize Pool Origin Data%  Move Log Cpy%Sync Convert
  lv   vg   rwi-a-r-- 5.00g                               100.00

Comment 4 Jarod Wilson 2013-05-09 20:35:54 UTC
Patch(es)

Comment 12 errata-xmlrpc 2013-11-21 13:36:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-1645.html

Comment 13 Jonathan Earl Brassow 2015-04-21 22:29:01 UTC
clearing my needinfo flag for this closed bug

