Bug 969214 - LVM RAID: Add ability to throttle sync operations for RAID LVs
Summary: LVM RAID: Add ability to throttle sync operations for RAID LVs
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: LVM and device-mapper development team
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-05-30 22:11 UTC by Jonathan Earl Brassow
Modified: 2021-09-08 18:55 UTC
CC List: 10 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-06-13 10:48:56 UTC
Target Upstream Version:
Embargoed:



Description Jonathan Earl Brassow 2013-05-30 22:11:01 UTC
Add ability to set limits on the background sync I/O operations performed by RAID logical volumes.

Being able to set limits on the maximum bandwidth used by sync operations can keep them from crowding out nominal I/O.  This is especially useful when:
- performing scrubbing operations, because these are usually run while the logical volumes are in use
- creating several RAID LVs, because the sync I/O can dramatically slow down LVM metadata operations when several sync threads are running at once (a sketch of the scrubbing case follows this list).
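
A minimal sketch of the scrubbing case (the VG/LV names 'vg0' and 'lv_raid' are hypothetical; '--syncaction check' is the existing lvchange scrubbing trigger):

    # Cap background sync I/O before starting a scrub on an in-use RAID LV
    lvchange --maxrecoveryrate 10M vg0/lv_raid
    lvchange --syncaction check vg0/lv_raid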

Comment 1 Jonathan Earl Brassow 2013-05-30 22:17:13 UTC
Testing should include:
1) ensuring that '--maxrecoveryrate <rate>' - found in lvcreate and lvchange - can be used to limit sync I/O so that it does not crowd out nominal I/O
2) ensuring that '--minrecoveryrate <rate>' can be used to guarantee that sync I/O achieves a minimum throughput even when heavy nominal I/O is present (one way to observe both is sketched below).

N.B. The specified rates cannot be perfectly guaranteed.  The result is best-effort, but the actual rates should be close.
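
A monitoring sketch for the tests above (assumes the lvs reporting fields named here are available in the build under test; the VG name 'vg0' is hypothetical):

    # Watch sync progress and the configured throttle while nominal I/O is running
    lvs -a -o name,sync_percent,raid_min_recovery_rate,raid_max_recovery_rate vg0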

Comment 2 Jonathan Earl Brassow 2013-05-31 16:31:20 UTC
Patch committed upstream:

commit 562c678ee23e76b675a8f4682bd6d2447d1d0de7
Author: Jonathan Brassow <jbrassow>
Date:   Fri May 31 11:25:52 2013 -0500

    DM RAID:  Add ability to throttle sync operations for RAID LVs.
    
    This patch adds the ability to set the minimum and maximum I/O rate for
    sync operations in RAID LVs.  The options are available for 'lvcreate' and
    'lvchange' and are as follows:
      --minrecoveryrate <Rate> [bBsSkKmMgG]
      --maxrecoveryrate <Rate> [bBsSkKmMgG]
    The rate is specified in size/sec/device.  If a suffix is not given,
    kiB/sec/device is assumed.  Setting the rate to 0 removes the preference.
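
A minimal usage sketch of the two options (the VG/LV names 'vg0' and 'lv_raid' are hypothetical; the rates are arbitrary examples):

    # Create a RAID1 LV with sync I/O capped at 50 MiB/sec/device and
    # guaranteed at least 10 MiB/sec/device
    lvcreate --type raid1 -m 1 -L 10G -n lv_raid \
             --maxrecoveryrate 50M --minrecoveryrate 10M vg0

    # Lower the cap later on the existing LV
    lvchange --maxrecoveryrate 20M vg0/lv_raid

    # Setting a rate to 0 removes the preference
    lvchange --maxrecoveryrate 0 vg0/lv_raid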

Comment 4 Nenad Peric 2013-11-26 19:30:16 UTC
Both lvcreate and lvchange honor the --maxrecoveryrate setting.
When creating many raid10 volumes, creation slows down substantially after a few LVs. With 'maxrecoveryrate' set, creation is almost instant and the synchronisation proceeds more slowly.
However, the same issue as in RHEL 6 remains: after using lvchange on an LV to modify the max recovery rate, the displayed sync percentage drops (for low percentages, all the way to 0.00).

Tested with lvm2-2.02.103-5.el7.x86_64

Comment 5 Nenad Peric 2014-04-14 09:09:16 UTC
Marking VERIFIED on RHEL7 with:

lvm2-2.02.105-14.el7    BUILT: Wed Mar 26 14:29:41 CET 2014
lvm2-libs-2.02.105-14.el7    BUILT: Wed Mar 26 14:29:41 CET 2014
lvm2-cluster-2.02.105-14.el7    BUILT: Wed Mar 26 14:29:41 CET 2014
device-mapper-1.02.84-14.el7    BUILT: Wed Mar 26 14:29:41 CET 2014
device-mapper-libs-1.02.84-14.el7    BUILT: Wed Mar 26 14:29:41 CET 2014
device-mapper-event-1.02.84-14.el7    BUILT: Wed Mar 26 14:29:41 CET 2014
device-mapper-event-libs-1.02.84-14.el7    BUILT: Wed Mar 26 14:29:41 CET 2014
device-mapper-persistent-data-0.3.0-1.el7    BUILT: Fri Mar 28 13:42:24 CET 2014

Comment 6 Ludek Smid 2014-06-13 10:48:56 UTC
This request was resolved in Red Hat Enterprise Linux 7.0.

Contact your manager or support representative in case you have further questions about the request.

