Red Hat Bugzilla – Bug 969214
LVM RAID: Add ability to throttle sync operations for RAID LVs
Last modified: 2014-06-17 21:18:18 EDT
Add ability to set limits on the background sync I/O operations performed by RAID logical volumes.
Being able to cap the maximum bandwidth used by sync operations keeps them from crowding out nominal I/O. This is especially useful when doing:
- scrubbing operations: because these operations are usually performed while the logical volumes are in use
- creating several RAID LVs: because the sync I/O can dramatically slow down LVM metadata operations if several sync threads are operating at once.
Testing should include:
1) ensuring that '--maxrecoveryrate <rate>' - found in lvcreate and lvchange - can be used to limit sync I/O so that it does not crowd out nominal I/O
2) ensuring that '--minrecoveryrate <rate>' can be used to guarantee sync I/O a minimum throughput even when heavy nominal I/O is present.
N.B. The specified rates cannot be perfectly guaranteed. Enforcement is "best-effort", but the effects should be close to the requested rates.
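The two test points above could be exercised roughly as follows. This is a sketch, not a recorded test run: the VG/LV names ("vg", "throttled") and the chosen rates are assumptions, and the commands are guarded so the script is a no-op on a host without LVM.

```shell
#!/bin/sh
# Sketch: throttle RAID sync I/O at creation time, then adjust it later.
# Assumed names: VG "vg", LV "throttled" -- substitute real ones.
MIN_RATE=1M     # floor:   sync gets at least 1 MiB/sec/device
MAX_RATE=10M    # ceiling: sync uses at most 10 MiB/sec/device

if command -v lvcreate >/dev/null 2>&1; then
    # Create a RAID1 LV whose initial sync is capped at MAX_RATE.
    lvcreate --type raid1 -m 1 -L 1G -n throttled \
             --maxrecoveryrate "$MAX_RATE" vg

    # Later, also guarantee the sync a minimum throughput.
    lvchange --minrecoveryrate "$MIN_RATE" vg/throttled

    # Watch the resync progress (Cpy%Sync column).
    lvs -a -o name,copy_percent vg
fi
```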
Patch committed upstream:
Author: Jonathan Brassow <email@example.com>
Date: Fri May 31 11:25:52 2013 -0500
DM RAID: Add ability to throttle sync operations for RAID LVs.
This patch adds the ability to set the minimum and maximum I/O rate for
sync operations in RAID LVs. The options are available for 'lvcreate' and
'lvchange' and are as follows:
--minrecoveryrate <Rate> [bBsSkKmMgG]
--maxrecoveryrate <Rate> [bBsSkKmMgG]
The rate is specified in size/sec/device. If a suffix is not given,
kiB/sec/device is assumed. Setting the rate to 0 removes the preference.
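The suffix convention above can be illustrated with a small helper. to_kib is a hypothetical function written for this note, not part of LVM; it mirrors the documented rule that a bare number means KiB/sec/device and that b/s/k/m/g mean bytes, 512-byte sectors, KiB, MiB and GiB.

```shell
# to_kib RATE -- convert a --min/--maxrecoveryrate argument to KiB/sec/device.
# Suffixes follow the option's [bBsSkKmMgG] set; a bare number is already KiB.
to_kib() {
    val=$1
    case $val in
        *[bB]) echo $(( ${val%?} / 1024 )) ;;          # bytes   -> KiB
        *[sS]) echo $(( ${val%?} * 512 / 1024 )) ;;    # sectors -> KiB
        *[kK]) echo $(( ${val%?} )) ;;
        *[mM]) echo $(( ${val%?} * 1024 )) ;;
        *[gG]) echo $(( ${val%?} * 1024 * 1024 )) ;;
        *)     echo $(( val )) ;;                      # bare: KiB assumed
    esac
}

to_kib 10M    # 10240
to_kib 2048s  # 1024
```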
Both lvcreate and lvchange honor --maxrecoveryrate settings.
When creating many raid10 volumes, creation slows down substantially after a few LVs. With 'maxrecoveryrate' set, creation is almost instant and the synchronisation proceeds more slowly instead.
However, the same issue seen in RHEL 6 remains: after using lvchange on an LV to modify maxrecoveryrate, the displayed sync percentage drops (for low percentages, all the way to 0.00).
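The percentage drop around a rate change could be observed with a sequence like the following. A sketch only: the VG/LV names ("vg", "lv") are assumptions, the raid_max_recovery_rate report field is assumed to be available in this lvs vintage, and the commands are guarded so the script is a no-op without LVM.

```shell
#!/bin/sh
# Sketch: change the max recovery rate and watch the reported sync percentage.
# Assumed names: VG "vg", RAID LV "lv".
FIELDS=lv_name,copy_percent,raid_max_recovery_rate

if command -v lvchange >/dev/null 2>&1; then
    lvs -a -o "$FIELDS" vg                 # percentage before the change
    lvchange --maxrecoveryrate 20M vg/lv
    lvs -a -o "$FIELDS" vg                 # percentage may drop toward 0.00
fi
```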
Tested with lvm2-2.02.103-5.el7.x86_64
Marking VERIFIED on RHEL7 with:
lvm2-2.02.105-14.el7 BUILT: Wed Mar 26 14:29:41 CET 2014
lvm2-libs-2.02.105-14.el7 BUILT: Wed Mar 26 14:29:41 CET 2014
lvm2-cluster-2.02.105-14.el7 BUILT: Wed Mar 26 14:29:41 CET 2014
device-mapper-1.02.84-14.el7 BUILT: Wed Mar 26 14:29:41 CET 2014
device-mapper-libs-1.02.84-14.el7 BUILT: Wed Mar 26 14:29:41 CET 2014
device-mapper-event-1.02.84-14.el7 BUILT: Wed Mar 26 14:29:41 CET 2014
device-mapper-event-libs-1.02.84-14.el7 BUILT: Wed Mar 26 14:29:41 CET 2014
device-mapper-persistent-data-0.3.0-1.el7 BUILT: Fri Mar 28 13:42:24 CET 2014
This request was resolved in Red Hat Enterprise Linux 7.0.
Contact your manager or support representative in case you have further questions about the request.