Bug 829920
Summary:            LVM RAID creation should be able to use '-l 100%FREE'
Product:            Red Hat Enterprise Linux 6
Reporter:           Corey Marthaler <cmarthal>
Component:          lvm2
Assignee:           Jonathan Earl Brassow <jbrassow>
lvm2 sub component: Mirroring and RAID (RHEL6)
QA Contact:         Cluster QE <mspqa-list>
Status:             CLOSED ERRATA
Severity:           low
Priority:           low
CC:                 agk, dwysocha, heinzm, jbrassow, msnitzer, nperic, peljasz, prajnoha, prockai, thornber, zkabelac
Version:            6.3
Target Milestone:   rc
Target Release:     ---
Hardware:           x86_64
OS:                 Linux
Fixed In Version:   lvm2-2.02.107-1.el6
Doc Type:           Bug Fix
Doc Text:           It is now possible to create RAID logical volumes using the %FREE suffix available to the '-l/--extents' option of the 'lvcreate' command. The resulting size is approximately equal to the requested percentage, owing to the adjustments made for RAID metadata areas.
Story Points:       ---
Clones:             1042510, 1085904
Last Closed:        2014-10-14 08:23:20 UTC
Type:               Bug
Regression:         ---
Mount Type:         ---
Documentation:      ---
Category:           ---
oVirt Team:         ---
Cloudforms Team:    ---
Bug Blocks:         1042510, 1075263, 1085904
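
To illustrate the Doc Text above, here is a minimal sketch of the expected extent arithmetic. It assumes the same layout as the "Same sized legs" verification scenario later in this report (a VG named "test" with two 7.50 GiB PVs and 4 MiB extents); the exact numbers are illustrative:

    # Each raid1 leg carries a 4 MiB rmeta subvolume, so '-l 100%FREE'
    # yields an LV slightly smaller than the total free space:
    #   free space per PV:  7.50 GiB = 1920 extents
    #   rmeta per leg:      1 extent (4 MiB)
    #   rimage per leg:     1920 - 1 = 1919 extents, i.e. ~7.49 GiB
    lvcreate --type raid1 -m 1 -l 100%FREE -n raid test
    lvs -a -o +devices test   # shows a 7.49g LV plus two 4.00m rmeta LVs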
Description
Corey Marthaler
2012-06-07 19:57:35 UTC
This request was not resolved in time for the current release. Red Hat invites you to ask your support representative to propose this request, if still desired, for consideration in the next release of Red Hat Enterprise Linux.

This request was erroneously removed from consideration in Red Hat Enterprise Linux 6.4, which is currently under development. This request will be evaluated for inclusion in Red Hat Enterprise Linux 6.4.

I've run into this many times myself, and I expected exactly the same behaviour as requested in this report. However, on reflection, it's quite disputable what *the correct* behaviour is. The man page currently defines the -l option as:

    -l, --extents LogicalExtentsNumber[%{VG|PVS|FREE|ORIGIN}]
           Gives the number of logical extents to allocate for the new logical volume.

For a mirror/raid volume, that is the target size of the top-level device representing the mirror volume, not the combined size of all the subdevices making up that volume. Of course, we could redefine this and recalculate the size so that -l 100%FREE would mean "use all the available space in the VG for *all the devices* that make up the top-level device" -- in the case of a mirror, counting all the legs plus the mirror log if it is kept on disk. But that would mean changing an already existing definition, which could confuse users accustomed to the old way of defining sizes for mirrors, so I think we simply can't do that. Another option would be an extra command-line switch, used together with the -l option, telling us to incorporate all the subdevices instead of the top-level device in the size calculation. Otherwise, if neither of these is done, we have to close this with WONTFIX.

We do need to fix this, but it's not straightforward. 100%FREE means use up all the free space; 50%FREE means use up half the free space. The current behaviour doesn't make sense and should be changed.

This problem still exists in lvm2-2.02.98-9.el6.x86_64:

    # lvcreate -d --type raid5 -i 7 -l +100%vg -n raid5-0 -r none
      Using default stripesize 64.00 KiB
      Rounding size (3814392 extents) down to stripe boundary size (3814391 extents)
      Insufficient free space: 4359319 extents needed, but only 3814392 available

100%VG would make sense if, for example, raid5 uses -i 7 in an eight-device VG, as above. I'd imagine this simpler kind of "special case" should be allowed for in the code. The code might currently be taking 100%VG as the data size and afterwards requesting additional space for parity data, whereas it should be the other way around: start from 100%VG and work out the data/parity relationship from there (see the worked sketch at the end of this report).

This bug now covers only RAID. Bug 1085904 covers mirrors.

The changes that make this possible are already upstream. There are a number of patches that are part of this fix, but the last code change associated with this bug occurred here:

    commit b359b86f88642888116d54d4204d367664fbdcf5
    Author: Alasdair G Kergon <agk>
    Date:   Mon Feb 24 22:48:23 2014 +0000

... and the last non-code (test suite) change is here:

    commit 38ab4c31a65b6ade5ec1e49dca4ef596a9c80923
    Author: Jonathan Brassow <jbrassow>
    Date:   Thu Feb 27 22:44:57 2014 -0600

Fix verified for raid volumes in the latest build.
    2.6.32-485.el6.x86_64
    lvm2-2.02.107-2.el6                          BUILT: Fri Jul 11 08:47:33 CDT 2014
    lvm2-libs-2.02.107-2.el6                     BUILT: Fri Jul 11 08:47:33 CDT 2014
    lvm2-cluster-2.02.107-2.el6                  BUILT: Fri Jul 11 08:47:33 CDT 2014
    udev-147-2.55.el6                            BUILT: Wed Jun 18 06:30:21 CDT 2014
    device-mapper-1.02.86-2.el6                  BUILT: Fri Jul 11 08:47:33 CDT 2014
    device-mapper-libs-1.02.86-2.el6             BUILT: Fri Jul 11 08:47:33 CDT 2014
    device-mapper-event-1.02.86-2.el6            BUILT: Fri Jul 11 08:47:33 CDT 2014
    device-mapper-event-libs-1.02.86-2.el6       BUILT: Fri Jul 11 08:47:33 CDT 2014
    device-mapper-persistent-data-0.3.2-1.el6    BUILT: Fri Apr  4 08:43:06 CDT 2014
    cmirror-2.02.107-2.el6                       BUILT: Fri Jul 11 08:47:33 CDT 2014

# Same sized legs

    [root@host-003 ~]# pvscan
      PV /dev/sda1   VG test   lvm2 [7.50 GiB / 7.50 GiB free]
      PV /dev/sdb1   VG test   lvm2 [7.50 GiB / 7.50 GiB free]
    [root@host-003 ~]# vgs
      VG   #PV #LV #SN Attr   VSize  VFree
      test   2   0   0 wz--n- 14.99g 14.99g
    [root@host-003 ~]# lvcreate -m 1 -n raid -l100%FREE --type raid1 test
      Logical volume "raid" created
    [root@host-003 ~]# lvs -a -o +devices
      LV              VG   Attr       LSize Cpy%Sync Devices
      raid            test rwi-a-r--- 7.49g 0.00     raid_rimage_0(0),raid_rimage_1(0)
      [raid_rimage_0] test Iwi-aor--- 7.49g          /dev/sda1(1)
      [raid_rimage_1] test Iwi-aor--- 7.49g          /dev/sdb1(1)
      [raid_rmeta_0]  test ewi-aor--- 4.00m          /dev/sda1(0)
      [raid_rmeta_1]  test ewi-aor--- 4.00m          /dev/sdb1(0)

# Different sized legs

    [root@host-003 ~]# pvcreate --setphysicalvolumesize 500M /dev/sda1
      Physical volume "/dev/sda1" successfully created
    [root@host-003 ~]# pvcreate --setphysicalvolumesize 1000M /dev/sdb1
      Physical volume "/dev/sdb1" successfully created
    [root@host-003 ~]# vgcreate test /dev/sd[ab]1
      Volume group "test" successfully created
    [root@host-003 ~]# pvscan
      PV /dev/sda1   VG test   lvm2 [496.00 MiB / 496.00 MiB free]
      PV /dev/sdb1   VG test   lvm2 [996.00 MiB / 996.00 MiB free]
    [root@host-003 ~]# lvcreate -m 1 -n raid -l100%FREE --type raid1 test
      Logical volume "raid" created
    [root@host-003 ~]# lvs -a -o +devices
      LV              VG   Attr       LSize   Cpy%Sync Devices
      raid            test rwi-a-r--- 492.00m 50.41    raid_rimage_0(0),raid_rimage_1(0)
      [raid_rimage_0] test Iwi-aor--- 492.00m          /dev/sdb1(1)
      [raid_rimage_1] test Iwi-aor--- 492.00m          /dev/sda1(1)
      [raid_rmeta_0]  test ewi-aor--- 4.00m            /dev/sdb1(0)
      [raid_rmeta_1]  test ewi-aor--- 4.00m            /dev/sda1(0)

# More legs than needed

    [root@host-003 ~]# pvcreate /dev/sd[abc]1
      Physical volume "/dev/sda1" successfully created
      Physical volume "/dev/sdb1" successfully created
      Physical volume "/dev/sdc1" successfully created
    [root@host-003 ~]# vgcreate test /dev/sd[abc]1
      Volume group "test" successfully created
    [root@host-003 ~]# lvcreate -m 1 -n raid -l100%FREE --type raid1 test
      Logical volume "raid" created
    [root@host-003 ~]# lvs -a -o +devices
      LV              VG   Attr       LSize Cpy%Sync Devices
      raid            test rwi-a-r--- 7.49g 59.38    raid_rimage_0(0),raid_rimage_1(0)
      [raid_rimage_0] test Iwi-aor--- 7.49g          /dev/sda1(1)
      [raid_rimage_1] test Iwi-aor--- 7.49g          /dev/sdb1(1)
      [raid_rmeta_0]  test ewi-aor--- 4.00m          /dev/sda1(0)
      [raid_rmeta_1]  test ewi-aor--- 4.00m          /dev/sdb1(0)

# More legs than needed, and different sized

    [root@host-001 ~]# pvcreate --setphysicalvolumesize 500M /dev/sda1
      Physical volume "/dev/sda1" successfully created
    [root@host-001 ~]# pvcreate --setphysicalvolumesize 1000M /dev/sdb1
      Physical volume "/dev/sdb1" successfully created
    [root@host-001 ~]# pvcreate --setphysicalvolumesize 1000M /dev/sdc1
      Physical volume "/dev/sdc1" successfully created
    [root@host-001 ~]# pvscan
      PV /dev/sda1   lvm2 [500.00 MiB]
      PV /dev/sdb1   lvm2 [1000.00 MiB]
      PV /dev/sdc1   lvm2 [1000.00 MiB]
    [root@host-001 ~]# vgcreate test /dev/sd[abc]1
      Volume group "test" successfully created
    [root@host-001 ~]# lvcreate -m 1 -n raid -l100%FREE --type raid1 test
      Logical volume "raid" created
    [root@host-001 ~]# lvs -a -o +devices
      LV              VG   Attr       LSize   Cpy%Sync Devices
      raid            test rwi-a-r--- 992.00m 38.31    raid_rimage_0(0),raid_rimage_1(0)
      [raid_rimage_0] test Iwi-aor--- 992.00m          /dev/sdb1(1)
      [raid_rimage_1] test Iwi-aor--- 992.00m          /dev/sdc1(1)
      [raid_rmeta_0]  test ewi-aor--- 4.00m            /dev/sdb1(0)
      [raid_rmeta_1]  test ewi-aor--- 4.00m            /dev/sdc1(0)

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1387.html
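
As referenced in the discussion above, here is a worked sketch of the corrected 100%VG extent arithmetic for the failing raid5 example (8 PVs, 3814392 free extents, -i 7, i.e. 7 data stripes plus 1 parity). The variable names are illustrative, and the division into equal per-device shares is an approximation of the allocator's behaviour, not its exact code:

    # Corrected direction: start from the available space, then derive the
    # data/parity split (approximate; ignores stripe-boundary rounding).
    free=3814392                 # free extents in the VG
    devs=8                       # 7 data stripes + 1 parity device
    per_dev=$(( free / devs ))   # 476799 extents available per device
    rimage=$(( per_dev - 1 ))    # minus one extent per rmeta subvolume
    echo $(( rimage * 7 ))       # ~3337586 extents of usable data space

The old code worked in the opposite direction: it took 100%VG (3814392 extents, rounded down to 3814391 on a stripe boundary) as the data size and then requested parity and metadata on top of it, roughly 3814391 * 8 / 7 plus metadata, which is the 4359319 extents the error message reports -- more than the VG actually holds.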