Bug 834050
| Summary: | Unable to create striped raid on VGs with 1k extent sizes |
|---|---|
| Product: | Red Hat Enterprise Linux 6 |
| Reporter: | Corey Marthaler <cmarthal> |
| Component: | lvm2 |
| Assignee: | Jonathan Earl Brassow <jbrassow> |
| Status: | CLOSED ERRATA |
| QA Contact: | Cluster QE <mspqa-list> |
| Severity: | low |
| Docs Contact: | |
| Priority: | high |
| Version: | 6.3 |
| CC: | agk, dwysocha, heinzm, jbrassow, msnitzer, nperic, prajnoha, prockai, thornber, tlavigne, zkabelac |
| Target Milestone: | rc |
| Target Release: | --- |
| Hardware: | x86_64 |
| OS: | Linux |
| Whiteboard: | |
| Fixed In Version: | lvm2-2.02.109-2.el6 |
| Doc Type: | Bug Fix |
| Doc Text: | No doc text required. RAID with stripe size < page_size has always been disallowed, but when VG extent size was < page size a failure was allowed to happen. |
| Story Points: | --- |
| Clone Of: | |
| : | 1067112 (view as bug list) |
| Environment: | |
| Last Closed: | 2014-10-14 08:23:27 UTC |
| Type: | Bug |
| Regression: | --- |
| Mount Type: | --- |
| Documentation: | --- |
| CRM: | |
| Verified Versions: | |
| Category: | --- |
| oVirt Team: | --- |
| RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- |
| Target Upstream Version: | |
| Embargoed: | |
| Bug Depends On: | |
| Bug Blocks: | 1067112, 1075263 |
Description
Corey Marthaler
2012-06-20 17:57:43 UTC
This request was not resolved in time for the current release. Red Hat invites you to ask your support representative to propose this request, if still desired, for consideration in the next release of Red Hat Enterprise Linux.

This request was erroneously removed from consideration in Red Hat Enterprise Linux 6.4, which is currently under development. This request will be evaluated for inclusion in Red Hat Enterprise Linux 6.4.

Add raid10 to this mix.

[root@taft-01 ~]# lvcreate --type raid10 -i 3 -n raid_on_1Kextent_vg -L 60M raid_sanity
  Using default stripesize 64.00 KiB
  Reducing requested stripe size 64.00 KiB to maximum, physical extent size 1.00 KiB
  device-mapper: reload ioctl on failed: Invalid argument
  Failed to activate new LV.

Failure on the reload ioctl? That's never meant to happen when it can be detected in advance and prevented. The minimum stripe size for RAID targets is 4 KiB:
linux/drivers/md/dm-raid.c:
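/* Excerpt from parse_raid_params(): 'value' is the requested chunk (stripe)
 * size in 512-byte sectors, so the 'value < 8' branch below is what rejects
 * stripe sizes smaller than 4 KiB. */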
} else if (!is_power_of_2(value)) {
rs->ti->error = "Chunk size must be a power of 2";
return -EINVAL;
} else if (value < 8) {
rs->ti->error = "Chunk size value is too small";
return -EINVAL;
}
In LVM, the maximum stripe size is the PE size. If the PE size is less than 4 KiB, striped RAID LVs are therefore a problem.
Does there need to be this restriction in LVM? See lvm2/lib/metadata/lv_manip.c:_validate_stripesize(). If the restriction can be lifted in LVM, RAID with PE sizes < 4kiB would work. (Is this even worth fixing? It should at least be caught before the ioctl in LVM - but where?)
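Until such a check exists in the tools, the condition can at least be spotted from the command line before attempting the create. The following is only an illustrative sketch, not part of any shipped tooling; the VG name raid_sanity is taken from this report and the 4 KiB threshold is the dm-raid minimum noted above:

# Compare the VG extent size (in bytes) against the 4 KiB minimum stripe
# size enforced by the dm-raid kernel target.
vg=raid_sanity
extent_bytes=$(vgs --noheadings --nosuffix --units b -o vg_extent_size "$vg" | tr -d ' ')
if [ "$extent_bytes" -lt 4096 ]; then
    echo "$vg: extent size ${extent_bytes}B < 4096B - striped RAID (raid4/5/6/10) will fail"
else
    echo "$vg: extent size ${extent_bytes}B - striped RAID is possible"
fi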
Fix committed upstream:
commit 4d45302e25f5fe1251bdd8f2c49c4a75a4a31d2e
Author: Jonathan Brassow <jbrassow>
Date: Fri Aug 15 21:15:34 2014 -0500
RAID: Fail RAID4/5/6 creation if PE size is less than STRIPE_SIZE_MIN
The maximum stripe size is equal to the volume group PE size. If that
size falls below the STRIPE_SIZE_MIN, the creation of RAID 4/5/6 volumes
becomes impossible. (The kernel will fail to load a RAID 4/5/6 mapping
table with a stripe size less than STRIPE_SIZE_MIN.) So, we report an
error if it is attempted.
This is very rare because reducing the PE size down that far limits the
size of the PV below that of modern devices.
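Hitting this in practice therefore means recreating the VG with a larger extent size before striped RAID can be used. A minimal sketch, assuming the PVs hold nothing worth keeping (the device names are only illustrative, reused from this report):

# Recreate the VG with an extent size of at least 4 KiB (4 MiB is the default),
# after which striped RAID creation succeeds.
vgremove raid_sanity                        # destroys the VG; assumes no LVs are needed
vgcreate -s 4M raid_sanity /dev/sd{b..i}1   # -s sets the physical extent size
lvcreate --type raid5 -i 2 -n raid_lv -L 60M raid_sanity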
[root@bp-01 lvm2]# for i in 4 5 6; do lvcreate --type raid$i -L 500M -n lv vg; done
  The extent size in volume group vg is too small to support striped RAID volumes.
  The extent size in volume group vg is too small to support striped RAID volumes.
  The extent size in volume group vg is too small to support striped RAID volumes.

I can still see the same issue, it seems:

[root@virt-065 ~]# lvcreate --type raid4 -i2 -n raid_on_1Kextent_vg -L 60M raid_sanity
  Using default stripesize 64.00 KiB
  Reducing requested stripe size 64.00 KiB to maximum, physical extent size 1.00 KiB.
  Error locking on node virt-065: device-mapper: reload ioctl on failed: Invalid argument
  Failed to activate new LV.
[root@virt-065 ~]# lvcreate --type raid5 -i2 -n raid_on_1Kextent_vg -L 60M raid_sanity
  Using default stripesize 64.00 KiB
  Reducing requested stripe size 64.00 KiB to maximum, physical extent size 1.00 KiB.
  Error locking on node virt-065: device-mapper: create ioctl on raid_sanity-raid_on_1Kextent_vg_rmeta_0 failed: Device or resource busy
  Failed to activate raid_sanity/raid_on_1Kextent_vg_rmeta_0 for clearing

[root@virt-065 ~]# dmsetup ls
raid_sanity-raid_on_1Kextent_vg_rmeta_0  (253:2)
raid_sanity-raid_on_1Kextent_vg_rimage_2 (253:7)
raid_sanity-raid_on_1Kextent_vg_rimage_1 (253:5)
raid_sanity-raid_on_1Kextent_vg_rimage_0 (253:3)
vg_virt065-lv_swap                       (253:1)
raid_sanity-raid_on_1Kextent_vg_rmeta_2  (253:6)
vg_virt065-lv_root                       (253:0)
raid_sanity-raid_on_1Kextent_vg_rmeta_1  (253:4)

[root@virt-065 ~]# lvs -a
  LV                             VG          Attr       LSize   Data%  Meta%  Move Log Cpy%Sync Convert
  raid_on_1Kextent_vg            raid_sanity rwi---r---      0
  raid_on_1Kextent_vg_rmeta_0    raid_sanity ewi---r---   1.00k
  raid_on_1Kextent_vg_rmeta_1    raid_sanity ewi---r---   1.00k
  raid_on_1Kextent_vg_rmeta_2    raid_sanity ewi---r---   1.00k
  [raid_on_1Kextent_vg_rimage_0] raid_sanity Iwi---r---  30.00m
  [raid_on_1Kextent_vg_rimage_1] raid_sanity Iwi---r---  30.00m
  [raid_on_1Kextent_vg_rimage_2] raid_sanity Iwi---r---  30.00m
  lv_root                        vg_virt065  -wi-ao----   6.71g
  lv_swap                        vg_virt065  -wi-ao---- 816.00m

If I try to remove these devices now I get errors:

[root@virt-065 ~]# lvremove raid_sanity
  Logical volume "raid_on_1Kextent_vg" successfully removed
  Can't remove logical volume raid_on_1Kextent_vg_rmeta_0 used as RAID device
  Can't remove logical volume raid_on_1Kextent_vg_rmeta_1 used as RAID device
  Can't remove logical volume raid_on_1Kextent_vg_rmeta_2 used as RAID device
[root@virt-065 ~]# lvs
  LV                          VG          Attr       LSize   Data%  Meta%  Move Log Cpy%Sync Convert
  raid_on_1Kextent_vg_rmeta_0 raid_sanity -wi-------   1.00k
  raid_on_1Kextent_vg_rmeta_1 raid_sanity -wi-------   1.00k
  raid_on_1Kextent_vg_rmeta_2 raid_sanity -wi-------   1.00k
  lv_root                     vg_virt065  -wi-ao----   6.71g
  lv_swap                     vg_virt065  -wi-ao---- 816.00m
[root@virt-065 ~]# dmsetup ls
raid_sanity-raid_on_1Kextent_vg_rmeta_0  (253:2)
raid_sanity-raid_on_1Kextent_vg_rimage_2 (253:7)
raid_sanity-raid_on_1Kextent_vg_rimage_1 (253:5)
raid_sanity-raid_on_1Kextent_vg_rimage_0 (253:3)
vg_virt065-lv_swap                       (253:1)
raid_sanity-raid_on_1Kextent_vg_rmeta_2  (253:6)
vg_virt065-lv_root                       (253:0)
raid_sanity-raid_on_1Kextent_vg_rmeta_1  (253:4)
[root@virt-065 ~]# lvremove -ff raid_sanity
  Logical volume "raid_on_1Kextent_vg_rmeta_0" successfully removed
  Logical volume "raid_on_1Kextent_vg_rmeta_1" successfully removed
  Logical volume "raid_on_1Kextent_vg_rmeta_2" successfully removed
[root@virt-065 ~]# dmsetup ls
raid_sanity-raid_on_1Kextent_vg_rmeta_0  (253:2)
raid_sanity-raid_on_1Kextent_vg_rimage_2 (253:7)
raid_sanity-raid_on_1Kextent_vg_rimage_1 (253:5)
raid_sanity-raid_on_1Kextent_vg_rimage_0 (253:3)
vg_virt065-lv_swap                       (253:1)
raid_sanity-raid_on_1Kextent_vg_rmeta_2  (253:6)
vg_virt065-lv_root                       (253:0)
raid_sanity-raid_on_1Kextent_vg_rmeta_1  (253:4)
[root@virt-065 ~]# lvs
  LV      VG         Attr       LSize   Data%  Meta%  Move Log Cpy%Sync Convert
  lv_root vg_virt065 -wi-ao----   6.71g
  lv_swap vg_virt065 -wi-ao---- 816.00m

Now I have a "dirty" device mapper as well.

[root@virt-065 ~]# vgs
  VG          #PV #LV #SN Attr   VSize   VFree
  raid_sanity   8   0   0 wz--n- 119.98g 119.90g
  vg_virt065    1   2   0 wz--n-   7.51g      0
[root@virt-065 ~]# lvs -a
  LV                             VG          Attr       LSize   Data%  Meta%  Move Log Cpy%Sync Convert
  [raid_on_1Kextent_vg_rimage_0] raid_sanity -wi-------  30.00m
  [raid_on_1Kextent_vg_rimage_1] raid_sanity -wi-------  30.00m
  [raid_on_1Kextent_vg_rimage_2] raid_sanity -wi-------  30.00m
  lv_root                        vg_virt065  -wi-ao----   6.71g
  lv_swap                        vg_virt065  -wi-ao---- 816.00m

This is all on:
lvm2-2.02.109-1.el6 BUILT: Tue Aug 5 17:36:23 CEST 2014
lvm2-libs-2.02.109-1.el6 BUILT: Tue Aug 5 17:36:23 CEST 2014
lvm2-cluster-2.02.109-1.el6 BUILT: Tue Aug 5 17:36:23 CEST 2014
udev-147-2.57.el6 BUILT: Thu Jul 24 15:48:47 CEST 2014
device-mapper-1.02.88-1.el6 BUILT: Tue Aug 5 17:36:23 CEST 2014
device-mapper-libs-1.02.88-1.el6 BUILT: Tue Aug 5 17:36:23 CEST 2014
device-mapper-event-1.02.88-1.el6 BUILT: Tue Aug 5 17:36:23 CEST 2014
device-mapper-event-libs-1.02.88-1.el6 BUILT: Tue Aug 5 17:36:23 CEST 2014
device-mapper-persistent-data-0.3.2-1.el6 BUILT: Fri Apr 4 15:43:06 CEST 2014
cmirror-2.02.109-1.el6 BUILT: Tue Aug 5 17:36:23 CEST 2014

I did not notice that the new packages were not actually in the nightly builds I was using. Will re-test with the new packages.

[root@virt-065 ~]# vgcreate -s 1K raid_sanity /dev/sd{b..i}1
Clustered volume group "raid_sanity" successfully created
[root@virt-065 ~]# lvcreate --type raid4 -i2 -n raid_on_1Kextent_vg -L 60M raid_sanity
Using default stripesize 64.00 KiB
The extent size in volume group raid_sanity is too small to support striped RAID volumes.
[root@virt-065 ~]# lvcreate --type raid5 -i2 -n raid_on_1Kextent_vg -L 60M raid_sanity
Using default stripesize 64.00 KiB
The extent size in volume group raid_sanity is too small to support striped RAID volumes.
[root@virt-065 ~]# lvcreate --type raid1 -m 1 -n radi_lv -L5G raid_sanity
Logical volume "radi_lv" created
marking VERIFIED with:
lvm2-2.02.109-2.el6 BUILT: Tue Aug 19 16:32:25 CEST 2014
lvm2-libs-2.02.109-2.el6 BUILT: Tue Aug 19 16:32:25 CEST 2014
lvm2-cluster-2.02.109-2.el6 BUILT: Tue Aug 19 16:32:25 CEST 2014
udev-147-2.57.el6 BUILT: Thu Jul 24 15:48:47 CEST 2014
device-mapper-1.02.88-2.el6 BUILT: Tue Aug 19 16:32:25 CEST 2014
device-mapper-libs-1.02.88-2.el6 BUILT: Tue Aug 19 16:32:25 CEST 2014
device-mapper-event-1.02.88-2.el6 BUILT: Tue Aug 19 16:32:25 CEST 2014
device-mapper-event-libs-1.02.88-2.el6 BUILT: Tue Aug 19 16:32:25 CEST 2014
device-mapper-persistent-data-0.3.2-1.el6 BUILT: Fri Apr 4 15:43:06 CEST 2014
cmirror-2.02.109-2.el6 BUILT: Tue Aug 19 16:32:25 CEST 2014
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
http://rhn.redhat.com/errata/RHBA-2014-1387.html