Bug 995628
| Summary: | LVM RAID: Unable to change recovery rate via 'lvchange' | | |
| --- | --- | --- | --- |
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Jonathan Earl Brassow <jbrassow> |
| Component: | lvm2 | Assignee: | Jonathan Earl Brassow <jbrassow> |
| Status: | CLOSED ERRATA | QA Contact: | Cluster QE <mspqa-list> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 6.5 | CC: | agk, cmarthal, dwysocha, heinzm, jbrassow, msnitzer, prajnoha, prockai, thornber, zkabelac |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | lvm2-2.02.100-1.el6 | Doc Type: | Bug Fix |
| Doc Text: | New feature in RHEL 6.5 - no need to document it not working. | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2013-11-21 23:26:57 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Jonathan Earl Brassow
2013-08-09 21:49:51 UTC
Fix committed upstream:

```
commit 8615234c0fa331852a11e1bf595bf1d4b858f4bc
Author: Jonathan Brassow <jbrassow>
Date:   Fri Aug 9 17:09:47 2013 -0500

    RAID: Fix bug making lvchange unable to change recovery rate for RAID

    1) Since the min|maxrecoveryrate args are size_kb_ARGs and they are
       recorded (and sent to the kernel) in terms of kB/sec/disk, we must
       back out the factor multiple done by size_kb_arg. This is already
       performed by 'lvcreate' for these arguments.
    2) Allow all RAID types, not just RAID1, to change these values.
    3) Add min|maxrecoveryrate_ARG to the list of 'update_partial_unsafe'
       commands so that lvchange will not complain about needing at least
       one of a certain set of arguments and failing.
    4) Add tests that check that these values can be set via lvchange and
       lvcreate and that 'lvs' reports back the proper results.
```

The upstream commit above failed to include the code changes; it only included the unit tests. Please also grab the following commit:

```
commit bb457adbb6d625a5f942faf8498bb5ee87645ec3
Author: Jonathan Brassow <jbrassow>
Date:   Mon Aug 12 12:40:52 2013 -0500

    RAID: Fix bug making lvchange unable to change recovery rate for RAID

    Commit ID 8615234c0fa331852a11e1bf595bf1d4b858f4bc failed to include
    the actual code changes that were made to fix the bug. Instead, all
    tests went in to validate the bug fix. This patch adds the missing
    code changes.
```

Fix verified in the latest rpms.
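Point 1 of the commit message is about unit scaling: lvm2 size arguments are parsed into 512-byte sectors internally, while the kernel expects these recovery rates in kB/sec/disk, so the factor of 2 applied on input has to be backed out again. A minimal shell sketch of that arithmetic (the variable names and the explicit factor of 2 are illustrative assumptions, not lvm2's actual code):

```shell
#!/bin/sh
# Hypothetical model of the bug: a size_kb_ARG value is scaled from kB
# to 512-byte sectors on input (x2), but the kernel wants the recovery
# rate in kB/sec/disk, so the scaling must be backed out (/2).

user_value_kb=50                   # what the user passes to --minrecoveryrate
as_sectors=$((user_value_kb * 2))  # internal sector-based representation

# Buggy lvchange: passed the sector-scaled value straight through.
buggy_rate=$as_sectors

# Fixed lvchange (matching what lvcreate already did): back out the factor.
fixed_rate=$((as_sectors / 2))

echo "buggy=$buggy_rate fixed=$fixed_rate"   # prints "buggy=100 fixed=50"
```

This is why, before the fix, a requested rate of 50 kB/sec would have reached the kernel doubled.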
```
2.6.32-410.el6.x86_64

lvm2-2.02.100-6.el6                      BUILT: Wed Oct 16 07:26:00 CDT 2013
lvm2-libs-2.02.100-6.el6                 BUILT: Wed Oct 16 07:26:00 CDT 2013
lvm2-cluster-2.02.100-6.el6              BUILT: Wed Oct 16 07:26:00 CDT 2013
udev-147-2.50.el6                        BUILT: Fri Oct 11 05:58:10 CDT 2013
device-mapper-1.02.79-6.el6              BUILT: Wed Oct 16 07:26:00 CDT 2013
device-mapper-libs-1.02.79-6.el6         BUILT: Wed Oct 16 07:26:00 CDT 2013
device-mapper-event-1.02.79-6.el6        BUILT: Wed Oct 16 07:26:00 CDT 2013
device-mapper-event-libs-1.02.79-6.el6   BUILT: Wed Oct 16 07:26:00 CDT 2013
cmirror-2.02.100-6.el6                   BUILT: Wed Oct 16 07:26:00 CDT 2013

[root@harding-02 ~]# lvcreate --type raid5 -i 2 -L 500M -n raid5 vg
  Using default stripesize 64.00 KiB
  Rounding size (125 extents) up to stripe boundary size (126 extents).
  Logical volume "raid5" created

[root@harding-02 ~]# lvs -a -o +devices
  LV               VG   Attr       LSize   Cpy%Sync Devices
  raid5            vg   rwi-a-r--- 504.00m 100.00   raid5_rimage_0(0),raid5_rimage_1(0),raid5_rimage_2(0)
  [raid5_rimage_0] vg   iwi-aor--- 252.00m          /dev/sdb1(1)
  [raid5_rimage_1] vg   iwi-aor--- 252.00m          /dev/sdb2(1)
  [raid5_rimage_2] vg   iwi-aor--- 252.00m          /dev/sdb3(1)
  [raid5_rmeta_0]  vg   ewi-aor---   4.00m          /dev/sdb1(0)
  [raid5_rmeta_1]  vg   ewi-aor---   4.00m          /dev/sdb2(0)
  [raid5_rmeta_2]  vg   ewi-aor---   4.00m          /dev/sdb3(0)

[root@harding-02 ~]# lvs -o name,segtype,raid_min_recovery_rate,raid_max_recovery_rate vg
  LV    Type  MinSync MaxSync
  raid5 raid5

[root@harding-02 ~]# lvchange --minrecoveryrate 50 vg/raid5
  Logical volume "raid5" changed.
[root@harding-02 ~]# lvs -o name,segtype,raid_min_recovery_rate,raid_max_recovery_rate vg
  LV    Type  MinSync MaxSync
  raid5 raid5      50

[root@harding-02 ~]# lvchange --maxrecoveryrate 100 vg/raid5
  Logical volume "raid5" changed.
[root@harding-02 ~]# lvs -o name,segtype,raid_min_recovery_rate,raid_max_recovery_rate vg
  LV    Type  MinSync MaxSync
  raid5 raid5      50     100

[root@harding-02 ~]# lvchange --maxrecoveryrate 10 vg/raid5
  Minumum recovery rate cannot be higher than maximum.
[root@harding-02 ~]# lvchange --minrecoveryrate 200 vg/raid5
  Minumum recovery rate cannot be higher than maximum.
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1704.html
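The min/max consistency check exercised at the end of the verification log (both out-of-range requests are rejected) can be sketched as a small shell model; `check_rates` and its treatment of an unset maximum are assumptions for illustration, not lvm2's actual source:

```shell
#!/bin/sh
# Hypothetical sketch of the consistency check lvchange applies before
# accepting a new recovery rate: min must not exceed max.
check_rates() {
    min=$1; max=$2
    # Assumption for this sketch: a max of 0 means "no limit set",
    # so the comparison only applies when both values are set.
    if [ "$max" -ne 0 ] && [ "$min" -gt "$max" ]; then
        echo "Minimum recovery rate cannot be higher than maximum."
        return 1
    fi
    return 0
}

check_rates 50 100 && echo "accepted"   # prints "accepted"
check_rates 200 100 || echo "rejected"  # prints the error, then "rejected"
```

With min=50 and max=100 already applied, both `--maxrecoveryrate 10` and `--minrecoveryrate 200` would make the minimum exceed the maximum, which matches the two rejections in the log.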