Bug 1250627

Summary: lvcreate --type raid1 -m 1 -l 90%vg -n cache0 vgcache.0 /dev/sda /dev/sdd # takes up all 100%
Product: Red Hat Enterprise Linux 7
Reporter: lejeczek <peljasz>
Component: lvm2
Assignee: Joe Thornber <thornber>
lvm2 sub component: Mirroring and RAID
QA Contact: cluster-qe <cluster-qe>
Status: CLOSED NOTABUG
Severity: high
Priority: unspecified
CC: agk, heinzm, jbrassow, mcsontos, msnitzer, prajnoha, zkabelac
Version: 7.1
Target Milestone: rc
Hardware: x86_64
OS: Linux
Last Closed: 2019-10-07 09:46:54 UTC
Type: Bug

Description lejeczek 2015-08-05 15:25:52 UTC
Description of problem:

Is this normal? Does -l actually work? Only when I specify the size with -L 200G do I see free PEs in the VG.

Version-Release number of selected component (if applicable):

lvm2-2.02.115-3.el7_1.1.x86_64


Comment 2 Marian Csontos 2015-08-10 15:28:12 UTC
Yes, `-l` works: it specifies the approximate size of the LV being created, not the total space consumed. In this case two 90%-of-VG RAID1 legs are created, and LVM correctly caps the allocation at 100% of the VG.

Comment 3 Marian Csontos 2015-08-10 15:36:49 UTC
Hmm, it seems it "works" that way, but according to the documentation it IS a bug:

> When expressed as a percentage, the number is treated as an approximate
> upper limit for the total number of physical extents to be allocated
> (including extents used by any mirrors, for example).
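The rule quoted above can be sketched as simple extent arithmetic. This is a hypothetical illustration, not LVM's actual allocator: the percentage bounds the TOTAL extents allocated, so each mirror leg gets roughly that total divided by the number of copies (the small RAID metadata sub-LVs are ignored here).

```python
import math

def mirrored_lv_extents(vg_extents, percent, copies):
    """Approximate per-leg extent count for `lvcreate -l <percent>%VG -m <copies-1>`,
    per the quoted rule: the percentage is an upper limit on the TOTAL number of
    extents allocated, including all mirror legs. (Sketch only; ignores RAID
    metadata sub-LVs and rounding to stripe boundaries.)"""
    total = math.floor(vg_extents * percent / 100)  # cap on total allocation
    return total // copies                          # extents per mirror leg

# A VG with 1000 extents: -l 90%VG -m 1 should allocate ~900 extents in total
# (450 per leg) and leave ~100 free -- not consume the whole VG, which is what
# the reporter observed.
per_leg = mirrored_lv_extents(1000, 90, 2)
total_used = per_leg * 2
```

Under the buggy behaviour reported here, each leg was instead sized at 90% of the VG, so two legs demanded 180% and the allocation was clamped to the entire VG.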

Comment 4 Alasdair Kergon 2018-02-12 19:42:26 UTC
Not sure where this landed compared to various upstream changes and fixes to this particular code.

If you think it's important, it's worth trying again and posting full -vvvv output to reveal what it is actually doing.

Comment 5 Jonathan Earl Brassow 2018-04-03 15:24:59 UTC
This seems to work for me... Is there still a problem here?

[root@bp-01 ~]# pvs -S vgname=vg
  PV         VG Fmt  Attr PSize    PFree
  /dev/sdb1  vg lvm2 a--  <836.69g <836.69g
  /dev/sdc1  vg lvm2 a--  <836.69g <836.69g
[root@bp-01 ~]# vgs vg
  VG #PV #LV #SN Attr   VSize VFree
  vg   2   0   0 wz--n- 1.63t 1.63t
[root@bp-01 ~]# lvcreate -m 1 -l 90%VG -n lv vg
  Logical volume "lv" created.
[root@bp-01 ~]# vgs
  VG         #PV #LV #SN Attr   VSize    VFree
  rhel_bp-01   1   3   0 wz--n- <464.76g    4.00m
  vg           2   1   0 wz--n-    1.63t <167.34g
[root@bp-01 ~]# lvs vg
  LV   VG Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv   vg rwi-a-r--- <753.02g                                    0.13
[root@bp-01 ~]# lvremove -ff vg
  Logical volume "lv" successfully removed
[root@bp-01 ~]# lvcreate -m 1 -l 90%FREE -n lv vg
  Logical volume "lv" created.
[root@bp-01 ~]# vgs
  VG         #PV #LV #SN Attr   VSize    VFree
  rhel_bp-01   1   3   0 wz--n- <464.76g    4.00m
  vg           2   1   0 wz--n-    1.63t <167.34g
[root@bp-01 ~]# lvs vg
  LV   VG Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv   vg rwi-a-r--- <753.02g                                    0.10
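The sizes in the output above are consistent with the man-page rule: treating 90%VG as a cap on the total allocation across both RAID1 legs reproduces the reported LSize and VFree. A quick sanity-check of that arithmetic (using the PV sizes from the pvs output; small rounding from RAID metadata is ignored):

```python
# Two PVs of <836.69g each; -l 90%VG -m 1 bounds the TOTAL allocation,
# including both mirror legs, at 90% of the VG.
vg_size_g = 836.69 * 2          # VG capacity in GiB (~1.63t, as vgs shows)
total_g = vg_size_g * 0.90      # upper limit across both legs
per_leg_g = total_g / 2         # visible LV size -> ~753.02g, as lvs reports
free_g = vg_size_g - total_g    # remaining space -> ~167.34g VFree, as vgs reports
```

So by 2018 the behaviour matches the documented semantics, which supports closing this as NOTABUG.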