Bug 969214 - LVM RAID: Add ability to throttle sync operations for RAID LVs
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Jonathan Earl Brassow
QA Contact: Cluster QE
Docs Contact:
Depends On:
Blocks:
Reported: 2013-05-30 18:11 EDT by Jonathan Earl Brassow
Modified: 2014-06-17 21:18 EDT
CC List: 10 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-06-13 06:48:56 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Jonathan Earl Brassow 2013-05-30 18:11:01 EDT
Add ability to set limits on the background sync I/O operations performed by RAID logical volumes.

Being able to set limits on the maximum bandwidth used by sync operations can keep them from crowding out nominal I/O.  This is especially useful when doing:
- scrubbing operations: because these operations are usually performed while the logical volumes are in use
- creating several RAID LVs: because the sync I/O can dramatically slow down the LVM metadata operations if there are several sync threads operating at once.
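For example (an illustrative sketch only; the VG name vg00, the LV name lv_raid, and the size and rate values below are hypothetical):

    # cap background sync I/O at roughly 10 MiB/sec per device at creation time
    lvcreate --type raid1 -m 1 -L 10G -n lv_raid --maxrecoveryrate 10M vg00

    # guarantee the sync at least 1 MiB/sec per device even under heavy nominal I/O
    lvchange --minrecoveryrate 1M vg00/lv_raid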
Comment 1 Jonathan Earl Brassow 2013-05-30 18:17:13 EDT
Testing should include:
1) ensuring that '--maxrecoveryrate <rate>' - found in lvcreate and lvchange - can be used to limit sync I/O so as not to crowd out nominal I/O
2) that '--minrecoveryrate <rate>' can be used to ensure that sync I/O achieves a minimum throughput even when heavy nominal I/O is present.

N.B. The rates specified cannot be perfectly guaranteed.  While the result is best-effort, the effects should be close to the requested rates.
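One way to check the configured rates and watch the resync progress, assuming the raid_min_recovery_rate and raid_max_recovery_rate report fields are available in this lvm2 build (vg00/lv_raid are hypothetical names):

    # show the configured throttle values and the current sync percentage
    lvs -a -o lv_name,copy_percent,raid_min_recovery_rate,raid_max_recovery_rate vg00

    # generate nominal read I/O against the LV while the resync runs; the sync
    # throughput (e.g. observed with iostat) should stay within the set bounds
    dd if=/dev/vg00/lv_raid of=/dev/null bs=1M count=1024 iflag=direct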
Comment 2 Jonathan Earl Brassow 2013-05-31 12:31:20 EDT
Patch committed upstream:

commit 562c678ee23e76b675a8f4682bd6d2447d1d0de7
Author: Jonathan Brassow <jbrassow@redhat.com>
Date:   Fri May 31 11:25:52 2013 -0500

    DM RAID:  Add ability to throttle sync operations for RAID LVs.
    
    This patch adds the ability to set the minimum and maximum I/O rate for
    sync operations in RAID LVs.  The options are available for 'lvcreate' and
    'lvchange' and are as follows:
      --minrecoveryrate <Rate> [bBsSkKmMgG]
      --maxrecoveryrate <Rate> [bBsSkKmMgG]
    The rate is specified in size/sec/device.  If a suffix is not given,
    kiB/sec/device is assumed.  Setting the rate to 0 removes the preference.
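A sketch of the options in use, following the units described in the commit message above (hypothetical vg00/lv_raid names):

    # limit sync I/O to about 20 MiB/sec per device
    lvchange --maxrecoveryrate 20M vg00/lv_raid

    # a value without a suffix is taken as kiB/sec/device, so this is 2048 kiB/sec/device
    lvchange --minrecoveryrate 2048 vg00/lv_raid

    # setting a rate to 0 removes the preference again
    lvchange --maxrecoveryrate 0 vg00/lv_raid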
Comment 4 Nenad Peric 2013-11-26 14:30:16 EST
Both lvcreate and lvchange honor the --maxrecoveryrate setting.
When creating a lot of raid10 volumes, the creation slows down substantially after a few LVs. When 'maxrecoveryrate' is set, the creation is almost instant and the synchronisation goes more slowly.
However, there is the same issue as in RHEL 6: after using lvchange on an LV to modify the maxrecoveryrate, the sync percentage displayed drops (for low percentages, all the way to 0.00).

Tested with lvm2-2.02.103-5.el7.x86_64
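A sequence of this shape shows the behaviour described above (hypothetical VG/LV names; copy_percent assumed available as an lvs report field):

    # while the LV is still syncing, change the throttle...
    lvchange --maxrecoveryrate 50M vg/raid10_lv

    # ...then re-check the reported sync percentage, which was observed to drop
    lvs -a -o lv_name,copy_percent vg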
Comment 5 Nenad Peric 2014-04-14 05:09:16 EDT
Marking VERIFIED on RHEL7 with:

lvm2-2.02.105-14.el7    BUILT: Wed Mar 26 14:29:41 CET 2014
lvm2-libs-2.02.105-14.el7    BUILT: Wed Mar 26 14:29:41 CET 2014
lvm2-cluster-2.02.105-14.el7    BUILT: Wed Mar 26 14:29:41 CET 2014
device-mapper-1.02.84-14.el7    BUILT: Wed Mar 26 14:29:41 CET 2014
device-mapper-libs-1.02.84-14.el7    BUILT: Wed Mar 26 14:29:41 CET 2014
device-mapper-event-1.02.84-14.el7    BUILT: Wed Mar 26 14:29:41 CET 2014
device-mapper-event-libs-1.02.84-14.el7    BUILT: Wed Mar 26 14:29:41 CET 2014
device-mapper-persistent-data-0.3.0-1.el7    BUILT: Fri Mar 28 13:42:24 CET 2014
Comment 6 Ludek Smid 2014-06-13 06:48:56 EDT
This request was resolved in Red Hat Enterprise Linux 7.0.

Contact your manager or support representative in case you have further questions about the request.
