Bug 829920 - LVM RAID creation should be able to use '-l 100%FREE'
Summary: LVM RAID creation should be able to use '-l 100%FREE'
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.3
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Jonathan Earl Brassow
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks: 1042510 BrassowRHEL6Bugs 1085904
 
Reported: 2012-06-07 19:57 UTC by Corey Marthaler
Modified: 2014-10-14 08:23 UTC
CC List: 11 users

Fixed In Version: lvm2-2.02.107-1.el6
Doc Type: Bug Fix
Doc Text:
It is now possible to create RAID logical volumes using the %FREE suffix available to the '-l/--extents' option of the 'lvcreate' command. The resulting size is only approximately equal to the requested percentage, because adjustments must be made for RAID metadata areas.
Clone Of:
: 1042510 1085904 (view as bug list)
Environment:
Last Closed: 2014-10-14 08:23:20 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
  System ID:    Red Hat Product Errata RHBA-2014:1387
  Private:      0
  Priority:     normal
  Status:       SHIPPED_LIVE
  Summary:      lvm2 bug fix and enhancement update
  Last Updated: 2014-10-14 01:39:47 UTC

Description Corey Marthaler 2012-06-07 19:57:35 UTC
Description of problem:
[root@taft-01 ~]# pvscan
  PV /dev/sdb1   VG test        lvm2 [67.83 GiB / 67.83 GiB free]
  PV /dev/sdc1   VG test        lvm2 [67.83 GiB / 67.83 GiB free]
  PV /dev/sda2   VG vg_taft01   lvm2 [67.75 GiB / 0    free]
  Total: 3 [203.41 GiB] / in use: 3 [203.41 GiB] / in no VG: 0 [0   ]

[root@taft-01 ~]# lvcreate -m 1 -n mirror -l100%FREE --corelog test
  Insufficient free space: 69456 extents needed, but only 34728 available

[root@taft-01 ~]# lvcreate -m 1 -n raid -l100%FREE --type raid1 test
  Insufficient free space: 69460 extents needed, but only 34728 available

[root@taft-01 ~]# lvcreate -i 2 -n stripe -l100%FREE test
  Using default stripesize 64.00 KiB
  Logical volume "stripe" created


Version-Release number of selected component (if applicable):
2.6.32-278.el6.x86_64
lvm2-2.02.95-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
lvm2-libs-2.02.95-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
lvm2-cluster-2.02.95-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
udev-147-2.41.el6    BUILT: Thu Mar  1 13:01:08 CST 2012
device-mapper-1.02.74-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
device-mapper-libs-1.02.74-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
device-mapper-event-1.02.74-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
device-mapper-event-libs-1.02.74-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012
cmirror-2.02.95-10.el6    BUILT: Fri May 18 03:26:00 CDT 2012

Comment 1 RHEL Program Management 2012-07-10 05:57:39 UTC
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.

Comment 2 RHEL Program Management 2012-07-10 23:58:54 UTC
This request was erroneously removed from consideration in Red Hat Enterprise Linux 6.4, which is currently under development.  This request will be evaluated for inclusion in Red Hat Enterprise Linux 6.4.

Comment 3 Peter Rajnoha 2012-10-19 13:46:44 UTC
I've run into this many times myself and I expected exactly the same behaviour as requested in this report. However, thinking about it, it is quite disputable what *the correct* behaviour is. Looking it up in the man page, the -l option is currently defined as:

 -l, --extents LogicalExtentsNumber[%{VG|PVS|FREE|ORIGIN}]
              Gives the number of logical extents to allocate for the new logical volume.

For a mirror/RAID volume, that is the target size of the top-level device representing the mirrored volume, not the total size of all the subdevices making up this volume.

Of course, we could redefine this and recalculate the size so that -l 100%FREE means "use all the available space in the VG for *all the devices* that make up the top-level device": in the case of a mirror, counting all the legs plus the mirror log if it is kept on disk. But that would mean changing an already existing definition, which could cause confusion among users who are used to the old way of defining size for mirrors. So I think we simply can't do that.

Another option would be to add an extra command-line switch, used together with the -l option, that tells us to count all the subdevices instead of just the top-level device in the size calculation. Otherwise, we would have to close this as WONTFIX.
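
To make the two readings concrete, here is a minimal arithmetic sketch (not part of the original comment). The free-extent count is taken from the failing '-m 1 ... -l100%FREE' run in the description; the one-extent metadata size per image is an assumption:

  # Two interpretations of '-l 100%FREE' for a two-leg mirror/raid1 ('-m 1').
  free_extents=34728   # free extents in the VG 'test' from the description
  legs=2               # '-m 1' gives two images
  rmeta_per_leg=1      # per-image metadata LV, assumed to be ~1 extent

  # Current definition: the top-level LV itself should be 100% of the free
  # space, so allocation needs legs * free_extents and fails:
  echo "current: $((legs * free_extents)) extents needed, $free_extents available"

  # Requested definition: 100% of the free space is the total consumed by all
  # legs, so each image gets about half the free extents minus its metadata:
  echo "requested: image of ~$((free_extents / legs - rmeta_per_leg)) extents"

The verification in comment 11 matches the second reading: a VG with 14.99 GiB free yields a 7.49 GiB raid1 LV.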

Comment 4 Alasdair Kergon 2012-12-03 16:55:00 UTC
We do need to fix this, but it's not straightforward.
100%FREE means use up all the free space.
50%FREE means use up half the free space.

The current behaviour doesn't make sense and should be changed.
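
Taken literally, and using the two-leg raid1 layout from the verification in comment 11 as an example, the definition works out roughly as follows (a sketch, not output from this bug; the 50%FREE figure is an extrapolation of the stated definition, not something verified here):

  # Rough expected sizes for a two-leg raid1 in a VG with ~14.99 GiB free,
  # under the definition "N%FREE means consume N% of the free space in total".
  vg_free_mib=15350   # ~14.99 GiB of free space, expressed in MiB
  echo "100%FREE -> LV of ~$((vg_free_mib / 2)) MiB (all free space consumed)"
  echo "50%FREE  -> LV of ~$((vg_free_mib / 4)) MiB (half the free space consumed)"

(RAID metadata areas shave a few megabytes off these figures, as the Doc Text above notes.)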

Comment 6 lejeczek 2013-09-14 16:18:20 UTC
This problem still exists in lvm2-2.02.98-9.el6.x86_64.

# lvcreate -d --type raid5 -i 7 -l +100%vg -n raid5-0 -r none

  Using default stripesize 64.00 KiB
  Rounding size (3814392 extents) down to stripe boundary size (3814391 extents)
  Insufficient free space: 4359319 extents needed, but only 3814392 available

100%VG should make sense even when, for example, raid5 uses -i 7 in an eight-device VG, like above.

I'd imagine this kind of simple "special case" should be allowed for in the code.

The code might currently be taking 100%VG and only afterwards requesting additional space for the parity data, whereas it should be the other way around, no?
100%VG => work out the data/parity relationship first
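
To make that point concrete, here is a minimal arithmetic sketch (not part of the original comment), using the extent counts from the failed command above; metadata overhead is ignored and the variable names are for illustration only:

  # Order-of-operations problem for 'raid5 -i 7 -l 100%VG' in an eight-PV VG.
  vg_extents=3814392        # total extents in the VG (from the transcript)
  stripes=7                 # '-i 7' => 7 data stripes
  images=$((stripes + 1))   # raid5 adds one stripe's worth of parity

  # What the code apparently does: treat 100%VG as the data (LV) size, round
  # it to a stripe boundary, then add parity on top, which can never fit:
  lv_extents=$((vg_extents / stripes * stripes))
  echo "asks for ~$((lv_extents * images / stripes)) extents, only $vg_extents available"

  # What the reporter suggests: start from 100%VG and reserve 1/8 for parity
  # first, so the data LV ends up at roughly 7/8 of the VG:
  echo "usable LV would be ~$((vg_extents * stripes / images)) extents"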

Comment 8 Jonathan Earl Brassow 2014-04-09 15:37:16 UTC
This bug now covers only RAID.  Bug 1085904 covers mirrors.

Comment 9 Jonathan Earl Brassow 2014-04-09 15:43:56 UTC
The changes that make this possible are already upstream.  There are a number of patches that are part of this fix, but the last code change associated with this bug occurred here:
  commit b359b86f88642888116d54d4204d367664fbdcf5
  Author: Alasdair G Kergon <agk>
  Date:   Mon Feb 24 22:48:23 2014 +0000

... and the last non-code (test suite) change is here:
  commit 38ab4c31a65b6ade5ec1e49dca4ef596a9c80923
  Author: Jonathan Brassow <jbrassow>
  Date:   Thu Feb 27 22:44:57 2014 -0600

Comment 11 Corey Marthaler 2014-07-15 19:24:05 UTC
Fix verified for raid volumes in the latest build.

2.6.32-485.el6.x86_64
lvm2-2.02.107-2.el6    BUILT: Fri Jul 11 08:47:33 CDT 2014
lvm2-libs-2.02.107-2.el6    BUILT: Fri Jul 11 08:47:33 CDT 2014
lvm2-cluster-2.02.107-2.el6    BUILT: Fri Jul 11 08:47:33 CDT 2014
udev-147-2.55.el6    BUILT: Wed Jun 18 06:30:21 CDT 2014
device-mapper-1.02.86-2.el6    BUILT: Fri Jul 11 08:47:33 CDT 2014
device-mapper-libs-1.02.86-2.el6    BUILT: Fri Jul 11 08:47:33 CDT 2014
device-mapper-event-1.02.86-2.el6    BUILT: Fri Jul 11 08:47:33 CDT 2014
device-mapper-event-libs-1.02.86-2.el6    BUILT: Fri Jul 11 08:47:33 CDT 2014
device-mapper-persistent-data-0.3.2-1.el6    BUILT: Fri Apr  4 08:43:06 CDT 2014
cmirror-2.02.107-2.el6    BUILT: Fri Jul 11 08:47:33 CDT 2014



# Same sized legs

[root@host-003 ~]# pvscan
  PV /dev/sda1   VG test         lvm2 [7.50 GiB / 7.50 GiB free]
  PV /dev/sdb1   VG test         lvm2 [7.50 GiB / 7.50 GiB free]

[root@host-003 ~]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree 
  test         2   0   0 wz--n- 14.99g 14.99g

[root@host-003 ~]# lvcreate -m 1 -n raid -l100%FREE --type raid1 test
  Logical volume "raid" created

[root@host-003 ~]# lvs -a -o +devices
  LV              VG   Attr       LSize   Cpy%Sync Devices
  raid            test rwi-a-r---   7.49g 0.00     raid_rimage_0(0),raid_rimage_1(0)
  [raid_rimage_0] test Iwi-aor---   7.49g          /dev/sda1(1)
  [raid_rimage_1] test Iwi-aor---   7.49g          /dev/sdb1(1)
  [raid_rmeta_0]  test ewi-aor---   4.00m          /dev/sda1(0)
  [raid_rmeta_1]  test ewi-aor---   4.00m          /dev/sdb1(0)



# Different sized legs

[root@host-003 ~]# pvcreate --setphysicalvolumesize 500M /dev/sda1
  Physical volume "/dev/sda1" successfully created
[root@host-003 ~]# pvcreate --setphysicalvolumesize 1000M /dev/sdb1
  Physical volume "/dev/sdb1" successfully created

[root@host-003 ~]# vgcreate test /dev/sd[ab]1
  Volume group "test" successfully created

[root@host-003 ~]# pvscan
  PV /dev/sda1   VG test         lvm2 [496.00 MiB / 496.00 MiB free]
  PV /dev/sdb1   VG test         lvm2 [996.00 MiB / 996.00 MiB free]

[root@host-003 ~]# lvcreate -m 1 -n raid -l100%FREE --type raid1 test
  Logical volume "raid" created

[root@host-003 ~]# lvs -a -o +devices
  LV              VG   Attr       LSize   Cpy%Sync Devices
  raid            test rwi-a-r--- 492.00m 50.41    raid_rimage_0(0),raid_rimage_1(0)
  [raid_rimage_0] test Iwi-aor--- 492.00m          /dev/sdb1(1)
  [raid_rimage_1] test Iwi-aor--- 492.00m          /dev/sda1(1)
  [raid_rmeta_0]  test ewi-aor---   4.00m          /dev/sdb1(0)
  [raid_rmeta_1]  test ewi-aor---   4.00m          /dev/sda1(0)



# More legs than needed

[root@host-003 ~]# pvcreate /dev/sd[abc]1
  Physical volume "/dev/sda1" successfully created
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdc1" successfully created

[root@host-003 ~]# vgcreate test /dev/sd[abc]1
  Volume group "test" successfully created

[root@host-003 ~]#  lvcreate -m 1 -n raid -l100%FREE --type raid1 test
  Logical volume "raid" created

[root@host-003 ~]# lvs -a -o +devices
  LV              VG   Attr       LSize   Cpy%Sync Devices
  raid            test rwi-a-r---   7.49g 59.38    raid_rimage_0(0),raid_rimage_1(0)
  [raid_rimage_0] test Iwi-aor---   7.49g          /dev/sda1(1)
  [raid_rimage_1] test Iwi-aor---   7.49g          /dev/sdb1(1)
  [raid_rmeta_0]  test ewi-aor---   4.00m          /dev/sda1(0)
  [raid_rmeta_1]  test ewi-aor---   4.00m          /dev/sdb1(0)



# More legs than needed, and different sizes

[root@host-001 ~]# pvcreate --setphysicalvolumesize 500M /dev/sda1
  Physical volume "/dev/sda1" successfully created
[root@host-001 ~]# pvcreate --setphysicalvolumesize 1000M /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
[root@host-001 ~]# pvcreate --setphysicalvolumesize 1000M /dev/sdc1
  Physical volume "/dev/sdc1" successfully created

[root@host-001 ~]# pvscan
  PV /dev/sda1                   lvm2 [500.00 MiB]
  PV /dev/sdb1                   lvm2 [1000.00 MiB]
  PV /dev/sdc1                   lvm2 [1000.00 MiB]

[root@host-001 ~]# vgcreate test /dev/sd[abc]1
  Volume group "test" successfully created

[root@host-001 ~]#  lvcreate -m 1 -n raid -l100%FREE --type raid1 test
  Logical volume "raid" created

[root@host-001 ~]# lvs -a -o +devices
  LV              VG   Attr       LSize   Cpy%Sync Devices
  raid            test rwi-a-r--- 992.00m 38.31    raid_rimage_0(0),raid_rimage_1(0)
  [raid_rimage_0] test Iwi-aor--- 992.00m          /dev/sdb1(1)
  [raid_rimage_1] test Iwi-aor--- 992.00m          /dev/sdc1(1)
  [raid_rmeta_0]  test ewi-aor---   4.00m          /dev/sdb1(0)
  [raid_rmeta_1]  test ewi-aor---   4.00m          /dev/sdc1(0)

Comment 12 errata-xmlrpc 2014-10-14 08:23:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1387.html

