Bug 1783946 - The volgroup option --reserved-percent doesn't work
Summary: The volgroup option --reserved-percent doesn't work
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: python-blivet
Version: 8.2
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 8.3
Assignee: Blivet Maintenance Team
QA Contact: Release Test Team
URL:
Whiteboard:
Depends On:
Blocks: 1841131
 
Reported: 2019-12-16 10:07 UTC by Qin Yuan
Modified: 2021-09-06 15:33 UTC
CC List: 20 users

Fixed In Version: python-blivet-3.2.2-1.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1841131 (view as bug list)
Environment:
Last Closed: 2020-11-04 03:22:23 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
installation logs in /tmp (1.08 MB, application/gzip) -- 2020-03-20 14:00 UTC, Qin Yuan


Links
GitHub storaged-project/blivet pull 830 (closed): Allow for reserved vg space and a growable thin pool. (#1783946) -- last updated 2021-02-18 03:24:13 UTC

Description Qin Yuan 2019-12-16 10:07:53 UTC
Description of problem:
Install RHVH 4.4 with a ks file having the following partitioning cmds:

ignoredisk --only-use=sda
zerombr
clearpart --all
bootloader --location=mbr
reqpart --add-boot
part pv.01 --size=200000
volgroup rhvh pv.01 --reserved-percent=10
logvol swap --fstype=swap --name=swap --vgname=rhvh --recommended
logvol none --name=pool --vgname=rhvh --thinpool --size=1 --grow
logvol / --fstype=ext4 --name=root --vgname=rhvh --thin --poolname=pool --size=10000 --grow
logvol /var --fstype=ext4 --name=var --vgname=rhvh --thin --poolname=pool --size=15000

The installation succeeds, but the free space of the VG is 0:
[root@ati_local_01 ~]# vgs
  VG   #PV #LV #SN Attr   VSize    VFree
  rhvh   1  11   0 wz--n- <195.31g    0 


Version-Release number of selected component (if applicable):
RHVH-4.4-20191205.t.1-RHVH-x86_64-dvd1.iso

How reproducible:
100%

Steps to Reproduce:
1. Prepare a ks file containing the above partitioning cmds
2. Install RHVH 4.4 using the ks file
3. Check VFree after installation finished.

Actual results:
1. The free space of the VG is 0.

Expected results:
1. The free space of the VG should be equal to (VSize * reserved-percent).

Additional info:
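
For illustration, a rough check of the expected reservation against the VSize reported above (an illustrative Python sketch, not part of any tooling):

  vg_size_gib = 195.31                         # VSize from the vgs output above
  reserved_percent = 10                        # volgroup rhvh pv.01 --reserved-percent=10
  print(vg_size_gib * reserved_percent / 100)  # expected VFree of about 19.53 GiB, yet vgs reports 0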

Comment 1 Yuval Turgeman 2019-12-29 15:11:29 UTC
Has this ever worked?

Comment 2 Qin Yuan 2019-12-30 01:21:05 UTC
Yes, it has been working ever since Bug 1131247 was fixed. 

The update for Bug 1131247 brought the following changes:

* If you create an LVM thin pool with automatic partitioning, 20% of the volume group size is reserved, with a minimum of 1 GiB and a maximum of 100 GiB.
* If you use the `logvol --thinpool --grow` command in a Kickstart file, the thin pool grows to the maximum possible size, which means no space is left in the volume group for it to grow. In this case, you can use the `volgroup --reserved-space` or `volgroup --reserved-percent` command to leave some space reserved in the volume group, which is recommended.
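
As a rough sketch of that first rule (illustrative Python only, not blivet's actual code; the function name is mine):

  GIB = 1024 ** 3

  def autopart_thinp_reserve(vg_size_bytes):
      # Reserve 20% of the VG size, clamped between 1 GiB and 100 GiB.
      return min(max(0.20 * vg_size_bytes, 1 * GIB), 100 * GIB)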

Comment 3 Yuval Turgeman 2019-12-30 08:15:20 UTC
Right, I checked both as well: anaconda reserves some space (though not exactly 10%) on 7.7, but it doesn't behave the same on 8. Looks like a blivet issue. David, any idea?

Comment 4 David Lehman 2020-03-11 17:15:38 UTC
I added a link to an untested pull request. If you tell me a tree/compose or blivet version, I can make an updates image so you can test the change.

Comment 5 Qin Yuan 2020-03-12 05:29:08 UTC
The latest RHVH 4.4 is consuming python-blivet-3.1.0-20.el8

Comment 6 David Lehman 2020-03-18 16:25:32 UTC
To test the proposed patch, please add the following to the installer boot/kernel command line:

  inst.updates=http://people.redhat.com/~dlehman/updates/lvm-thin-reserved.0.img


Please let me know how it goes and attach logs if there is a failure, whether it be the same one or something new.

Thanks.

Comment 7 Qin Yuan 2020-03-20 13:59:32 UTC
Tested RHVH-UNSIGNED-ISO-4.4-RHEL-8-20200318.0-RHVH-x86_64-dvd1.iso with the given inst.updates; --reserved-percent still doesn't work.

Comment 8 Qin Yuan 2020-03-20 14:00:37 UTC
Created attachment 1671918 [details]
installation logs in /tmp

Comment 9 Lukas Svaty 2020-03-25 16:23:13 UTC
Per comment #2, adding the Regression keyword and targeting 4.4.0.

Comment 15 Jan Stodola 2020-04-20 16:19:33 UTC
Reproduced on RHEL-8.2 with the following ks:

text
keyboard --vckeymap=us --xlayouts='us'
lang en_US.UTF-8
rootpw redhat
timezone Europe/Prague --isUtc
reboot

# Disk partitioning information
clearpart --all --initlabel
bootloader --location=mbr
reqpart --add-boot
part pv.01 --size=8000
volgroup vg pv.01 --reserved-percent=10
logvol swap --fstype=swap --name=swap --vgname=vg --size=1000
logvol none --name=pool --vgname=vg --thinpool --size=1 --grow
logvol / --name=root --vgname=vg --thin --poolname=pool --size=4000 --grow

%packages 
@base
%end

[root@localhost ~]# vgs
  VG #PV #LV #SN Attr   VSize  VFree
  vg   1   3   0 wz--n- <7.81g    0 
[root@localhost ~]#

Comment 19 Vojtech Trefny 2020-05-15 10:54:08 UTC
Qin Yuan, can you please test this with the new updates image: http://file.emea.redhat.com/~vtrefny/img/rhbz1783946.img? It contains the fix previously mentioned by Dave Lehman and also the fix for rhbz#1737490, which is related to this issue. I tested this updates image with the kickstart posted by Jan Stodola in comment #15 and it worked for me (the created VG had 800 MB of free space, which is the 10% requested in the kickstart).

Comment 20 Qin Yuan 2020-05-19 05:33:52 UTC
Tested the update img:

1. Failed to install on dirty disks, same as https://bugzilla.redhat.com/show_bug.cgi?id=1766498

2. Installing on a clean disk with the kickstart from comment #0, VFree is 9% of VSize, not 10%. In vgdisplay, the free size is almost 10% of the allocated size, see:

[root@ati-local-02 ~]# vgs --units m
  VG   #PV #LV #SN Attr   VSize      VFree    
  rhvh   1  11   0 wz--n- 199996.00m 18120.00m

[root@ati-local-02 ~]# vgdisplay --units m
  --- Volume group ---
  VG Name               rhvh
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  40
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                11
  Open LV               8
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               199996.00 MiB
  PE Size               4.00 MiB
  Total PE              49999
  Alloc PE / Size       45469 / 181876.00 MiB
  Free  PE / Size       4530 / 18120.00 MiB
  VG UUID               EZKzwq-X6V3-03By-lTMm-UFw6-ivpZ-GVqC6j


Shouldn't VFree = VSize * 10%?
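
(For reference, working the numbers above: 18120 / 199996 is about 9.06% of VSize, while 18120 / 181876 is about 9.96% of the allocated size.)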

Comment 21 Vojtech Trefny 2020-05-19 11:17:48 UTC
I've tested this again with the kickstart from comment #0 and it still works for me; I have 5000 of 49999 physical extents free:

# vgdisplay rhvh --units=m
  --- Volume group ---
  VG Name               rhvh
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                4
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               199996.00 MiB
  PE Size               4.00 MiB
  Total PE              49999
  Alloc PE / Size       44999 / 179996.00 MiB
  Free  PE / Size       5000 / 20000.00 MiB
  VG UUID               w5nA38-PCZQ-TOFh-M6HC-Wpfp-8h0L-7DQom3


Your vgdisplay output shows 11 total LVs, but the kickstart contains only 4 (swap, the thin pool, and 2 thin LVs for / and /var) -- this could be the reason for the difference in our tests.

Comment 22 Qin Yuan 2020-05-20 00:32:13 UTC
I used RHVH-4.4-20200507.1-RHVH-x86_64-dvd1.iso + inst.updates. RHVH creates the required NIST LVs automatically if they are not specified in the ks, along with the rhvh layer, see:

[root@ati-local-02 ~]# lvs
  LV                           VG   Attr       LSize   Pool Origin                     Data%  Meta%  Move Log Cpy%Sync Convert
  home                         rhvh Vwi-aotz--   1.00g pool                            4.79                                   
  pool                         rhvh twi-aotz-- 159.89g                                 5.12   2.14                            
  rhvh-4.4.0.18-0.20200507.0   rhvh Vwi---tz-k 145.24g pool root                                                              
  rhvh-4.4.0.18-0.20200507.0+1 rhvh Vwi-aotz-- 145.24g pool rhvh-4.4.0.18-0.20200507.0 4.74                                   
  root                         rhvh Vri---tz-k 145.24g pool                                                                   
  swap                         rhvh -wi-ao----  15.72g                                                                        
  tmp                          rhvh Vwi-aotz--   1.00g pool                            4.82                                   
  var                          rhvh Vwi-aotz-- <14.65g pool                            3.64                                   
  var_crash                    rhvh Vwi-aotz--  10.00g pool                            2.24                                   
  var_log                      rhvh Vwi-aotz--   8.00g pool                            2.45                                   
  var_log_audit                rhvh Vwi-aotz--   2.00g pool                            4.77   


Even if I define all the NIST LVs in the ks, as below, vgs shows the same result:

zerombr
clearpart --all
bootloader --location=mbr
reqpart --add-boot
part pv.01 --size=200000
volgroup rhvh pv.01 --reserved-percent=10
logvol swap --fstype=swap --name=swap --vgname=rhvh --recommended
logvol none --name=pool --vgname=rhvh --thinpool --size=1 --grow
logvol / --fstype=ext4 --name=root --vgname=rhvh --thin --poolname=pool --size=10000 --grow
logvol /var --fstype=ext4 --name=var --vgname=rhvh --thin --poolname=pool --size=15000
logvol /var/log --fstype=ext4 --name=var_log --vgname=rhvh --thin --poolname=pool --size=8192
logvol /var/log/audit --fstype=ext4 --name=var_log_audit --vgname=rhvh --thin --poolname=pool --size=2048
logvol /home --fstype=ext4 --name=home --vgname=rhvh --thin --poolname=pool --size=1024
logvol /tmp --fstype=ext4 --name=tmp --vgname=rhvh --thin --poolname=pool --size=1024
logvol /var/crash --fstype=ext4 --name=var_crash --vgname=rhvh --thin --poolname=pool --size=10240

Comment 23 Vojtech Trefny 2020-05-20 11:28:01 UTC
Unfortunately, I wasn't able to find the RHVH-4.4-20200507.1-RHVH-x86_64-dvd1.iso image on our devel mirrors, so I couldn't test this. But creating additional LVs and snapshots in the %post scripts will change the free space in the VG -- the 1% decrease in free space is most likely caused by the pool metadata and/or pmspare growing.

Comment 24 Qin Yuan 2020-05-22 05:44:30 UTC
The 1% decrease in free space is caused by pool_tmeta and pmspare.

Using the ks in comment #22, before running the %post section, check the LVs and VG:

[anaconda root@dell-per730-34 ~]# lvs -a
  LV              VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home            rhvh Vwi-aotz--   1.00g pool        4.79                                   
  [lvol0_pmspare] rhvh ewi-------  84.00m                                                    
  pool            rhvh twi-aotz-- 159.89g             4.50   15.10                           
  [pool_tdata]    rhvh Twi-ao---- 159.89g                                                    
  [pool_tmeta]    rhvh ewi-ao----  84.00m                                                    
  root            rhvh Vwi-aotz-- 123.24g pool        4.90                                   
  swap            rhvh -wi-ao----  15.72g                                                    
  tmp             rhvh Vwi-aotz--   1.00g pool        4.79                                   
  var             rhvh Vwi-aotz-- <14.65g pool        3.79                                   
  var_crash       rhvh Vwi-aotz--  10.00g pool        2.24                                   
  var_log         rhvh Vwi-aotz--   8.00g pool        2.45                                   
  var_log_audit   rhvh Vwi-aotz--   2.00g pool        4.76  

[anaconda root@dell-per730-34 ~]# vgdisplay --units m
  --- Volume group ---
  VG Name               rhvh
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  19
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                9
  Open LV               8
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               199996.00 MiB
  PE Size               4.00 MiB
  Total PE              49999
  Alloc PE / Size       44999 / 179996.00 MiB
  Free  PE / Size       5000 / 20000.00 MiB
  VG UUID               n37jk9-6u85-IwJZ-0L7g-V3HF-g6DY-YoOCyO

As you can see, pmspare and pool_tmeta are both 84m, and VFree is 10% of VSize.

After the %post section finished, check the LVs and VG:

[root@ati-local-02 ~]# lvs -a
  LV                           VG   Attr       LSize   Pool Origin                     Data%  Meta%  Move Log Cpy%Sync Convert
  home                         rhvh Vwi-aotz--   1.00g pool                            4.79                                   
  [lvol0_pmspare]              rhvh ewi-------   1.00g                                                                        
  pool                         rhvh twi-aotz-- 159.89g                                 4.58   2.09                            
  [pool_tdata]                 rhvh Twi-ao---- 159.89g                                                                        
  [pool_tmeta]                 rhvh ewi-ao----   1.00g                                                                        
  rhvh-4.4.0.18-0.20200507.0   rhvh Vwi---tz-k 123.24g pool root                                                              
  rhvh-4.4.0.18-0.20200507.0+1 rhvh Vwi-aotz-- 123.24g pool rhvh-4.4.0.18-0.20200507.0 4.90                                   
  root                         rhvh Vri---tz-k 123.24g pool                                                                   
  swap                         rhvh -wi-ao----  15.72g                                                                        
  tmp                          rhvh Vwi-aotz--   1.00g pool                            4.80                                   
  var                          rhvh Vwi-aotz-- <14.65g pool                            3.63                                   
  var_crash                    rhvh Vwi-aotz--  10.00g pool                            2.24                                   
  var_log                      rhvh Vwi-aotz--   8.00g pool                            2.50                                   
  var_log_audit                rhvh Vwi-aotz--   2.00g pool                            4.77       

 [root@ati-local-02 ~]# vgdisplay --units m
  --- Volume group ---
  VG Name               rhvh
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  35
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                11
  Open LV               8
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               199996.00 MiB
  PE Size               4.00 MiB
  Total PE              49999
  Alloc PE / Size       45469 / 181876.00 MiB
  Free  PE / Size       4530 / 18120.00 MiB
  VG UUID               n37jk9-6u85-IwJZ-0L7g-V3HF-g6DY-YoOCyO
   

pool_tmeta and pmspare were both extended to 1024m, which took (1024m - 84m) * 2 = 1880m from the free space of the VG, so VFree was reduced from 20000m to 18120m.

In the RHVH %post section, there is a step that increases the pool metadata to 1G. Now we can say the --reserved-percent option works as expected.
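
A quick sketch of the accounting above (illustrative Python; the numbers come from the two vgdisplay outputs in this comment):

  pe_size_mib = 4
  free_before_mib = 5000 * pe_size_mib       # 20000 MiB free before %post, i.e. 10% of the VG
  tmeta_growth_mib = (1024 - 84) * 2         # pool_tmeta and pmspare each grew from 84m to 1024m
  print(free_before_mib - tmeta_growth_mib)  # 18120 MiB, matching VFree after %post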

Comment 30 Jakub Rusz 2020-06-16 13:20:40 UTC
A test for this has been created and merged.

Comment 31 Petr Janda 2020-06-26 12:35:39 UTC
Verified on RHEL-8.3.0-20200616.0 x86_64

Comment 32 cshao 2020-07-08 02:46:27 UTC
The fixed version, python-blivet-3.2.2-1.el8, is for 8.3, so the target milestone should be changed to 8.3, not 8.2.

Comment 35 errata-xmlrpc 2020-11-04 03:22:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (python-blivet bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4728

