Bug 1687920 - RHVH fails to reinstall if the required size exceeds the available disk space, due to an anaconda bug
Summary: RHVH fails to reinstall if the required size exceeds the available disk space, due to an anaconda bug
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: redhat-virtualization-host
Version: 4.2.8
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ovirt-4.3.5
Target Release: 4.3.5
Assignee: Yuval Turgeman
QA Contact: Qin Yuan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-03-12 16:13 UTC by Donald Berry
Modified: 2022-07-09 14:13 UTC
CC List: 13 users

Fixed In Version: redhat-virtualization-host-4.3.5-20190710.2.el7_7
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-08-12 11:54:27 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:


Attachments
  logs from https://access.redhat.com/solutions/20358 (287.27 KB, application/x-bzip), 2019-03-12 16:13 UTC, Donald Berry
  files in /tmp (206.13 KB, application/gzip), 2019-03-12 16:15 UTC, Donald Berry
  unknown error has occurred (69.63 KB, image/png), 2019-03-13 14:43 UTC, Donald Berry
  details if more info is clicked (85.85 KB, image/png), 2019-03-13 14:44 UTC, Donald Berry
  Anaconda logs (204.04 KB, application/gzip), 2019-03-13 18:10 UTC, Ryan Barry


Links
  Red Hat Issue Tracker RHV-47451, last updated 2022-07-09 14:13:08 UTC
  Red Hat Product Errata RHSA-2019:2437, last updated 2019-08-12 11:54:48 UTC
  oVirt gerrit 101058 (MERGED): scripts: removed autopart from kickstart, last updated 2021-01-01 18:08:51 UTC
  oVirt gerrit 101616 (MERGED): scripts: removed autopart from kickstart, last updated 2021-01-01 18:08:51 UTC

Description Donald Berry 2019-03-12 16:13:52 UTC
Created attachment 1543256 [details]
logs from https://access.redhat.com/solutions/20358

Description of problem:

Reinstalling RHVH 4.2 fails with "an unknown error has occurred" shortly after the GUI installer starts up (before INSTALLATION DESTINATION has been configured).

When I click "more info" it says:

  File "/usr/lib/python2.7/site-packages/blivet/devices/lvm.py", line 620, in _setSize
    raise ValueError("not enough free space in volume group")

Version-Release number of selected component (if applicable):
RHVH-4.2-20190219.0-RHVH-x86_64-dvd1.iso

How reproducible:
I saw the same thing yesterday with RHVH-4.2-20180508.1-RHVH-x86_64-dvd1.iso

Steps to Reproduce:
1. attempt to reinstall RHVH over an existing installation

Actual results:
see above

Expected results:


Additional info:

Comment 1 Donald Berry 2019-03-12 16:15:09 UTC
Created attachment 1543258 [details]
files in /tmp

Comment 2 cshao 2019-03-13 00:02:05 UTC
Hi Dberry,
I can't reproduce this issue; could you please provide more detailed info?
Machine type?
Disk size?
Can this be reproduced with clean install?

Thanks.

Comment 3 Donald Berry 2019-03-13 00:09:29 UTC
This is on a Dell r640, 3 disks (1x447GB RAID 1 SSD, 2x2TB SATA).
I haven't tried it with a clean install - these servers were all provisioned with default beaker builds as part of the racking.
To get around this issue I switch to a shell (alt-f2) and wipe the disks (dd if=/dev/zero of=/dev/sd[abcd] bs=512 count=32, wipefs -fa /dev/sd[abc], also delete partitions with fdisk). Not sure which of these are needed. Reinstall then works.
You are welcome to try it on our lab servers, I think it is reproducible. Contact me and we can discuss it.

Comment 4 Yuval Turgeman 2019-03-13 09:00:18 UTC
Looking at the logs, it seems there's a 'clearpart --none', so no partitions are deleted; perhaps that's why you're running out of space?

Comment 5 Donald Berry 2019-03-13 14:29:39 UTC
Yuval, I did not kickstart these. Is there some option in the anaconda GUI that would set that?

Comment 6 Donald Berry 2019-03-13 14:43:13 UTC
Created attachment 1543683 [details]
unknown error has occurred

Tried reinstalling. This window (an unknown error has occurred) pops up after selecting the language, before clicking anything else (KEYBOARD, INSTALLATION DESTINATION, etc.)

Comment 7 Donald Berry 2019-03-13 14:44:07 UTC
Created attachment 1543684 [details]
details if more info is clicked

Comment 9 Ryan Barry 2019-03-13 18:10:27 UTC
Created attachment 1543718 [details]
Anaconda logs

Comment 10 Ryan Barry 2019-03-13 18:12:15 UTC
Samantha, Anaconda is giving up before it gets anywhere. As soon as the GUI starts and installation destination is selected, it appears to run through the LVs and crash, without the option to remove existing filesystems.

Any ideas? Logs are attached.

Comment 11 Yuval Turgeman 2019-03-13 19:31:18 UTC
Does deselecting and then reselecting the drive help?

Comment 12 Donald Berry 2019-03-13 19:46:40 UTC
It crashes before you can do that.

- install RHVH
- attempt to reinstall RHVH; it fails immediately after selecting the language. Perhaps it would fail without even doing that if I waited a few more seconds.
- open a shell (alt-f2) and clear the disks (parted, fdisk, dd, wipefs)
- reinstall then works

Note, I did not click 'installation destination', just language.

Comment 14 Qin Yuan 2019-03-14 08:11:08 UTC
I saw that the outputs of pvs and lvs were:
  PV         VG   Fmt  Attr PSize   PFree  
  /dev/sda2  rhvh lvm2 a--  360.43g <71.27g
  /dev/sdb1  sata lvm2 a--   <1.82t      0 
  /dev/sdc1  sata lvm2 a--   <1.82t      0 

  LV                          VG   Attr       LSize    Pool   Origin                    Data%  Meta%  Move Log Cpy%Sync Convert
  home                        rhvh Vwi-a-tz--    1.00g pool00                           4.79                                   
  pool00                      rhvh twi-aotz--  283.99g                                  2.98   2.13                            
  rhvh-4.2.3.0-0.20180508.0   rhvh Vwi---tz-k <250.00g pool00 root                                                             
  rhvh-4.2.3.0-0.20180508.0+1 rhvh Vwi-a-tz-- <250.00g pool00 rhvh-4.2.3.0-0.20180508.0 2.32                                   
  root                        rhvh Vwi-a-tz-- <250.00g pool00                           2.33                                   
  swap                        rhvh -wi-a-----    4.00g                                                                         
  tmp                         rhvh Vwi-a-tz--    1.00g pool00                           4.83                                   
  var                         rhvh Vwi-a-tz--   15.00g pool00                           3.85                                   
  var_crash                   rhvh Vwi-a-tz--   10.00g pool00                           2.86                                   
  var_log                     rhvh Vwi-a-tz--  <15.00g pool00                           2.51                                   
  var_log_audit               rhvh Vwi-a-tz--    2.00g pool00                           4.79                                   
  vms                         sata -wi-a-----   <3.64t                                           

RHVH was installed only on sda, and there was an extra VG named sata. I tried to simulate this scenario: install RHVH on only one disk, create a PV, VG, and LV on the other two disks, then reinstall RHVH. I still can't reproduce the issue.

I also noticed the physical sector size of sda is 4096B; not sure if this matters, as my disks' physical sector sizes are all 512B.
Model: ATA DELLBOSS VD (scsi)
Disk /dev/sda: 480GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Disk Flags: 

How did you do the partitioning? Is it still reproducible when keeping sdb and sdc clean, without the extra VG?

Comment 15 Donald Berry 2019-03-14 16:46:56 UTC
I reproduced this on another box today (dell-r640-02).

1. RHEL server 7.6 was installed, with 3 PVs & 2 VGs created by the kickstart:

	part pv.01 --ondisk=$SSD --grow
	volgroup ssd pv.01
	logvol swap --vgname=ssd --name=swap --fstype=swap --recommended
	logvol / --vgname=ssd --name=root --fstype=xfs --percent=80

	part pv.02 --ondisk=$SATA1 --grow
	part pv.03 --ondisk=$SATA2 --grow
	volgroup sata pv.02 pv.03
	logvol /vms --vgname=sata --name=vms --fstype=xfs --size 1 --grow

Here is LVM output on a RHEL box with the same kickstart:

[root@dell-r640-10 ~]# pvs
  PV         VG   Fmt  Attr PSize    PFree
  /dev/sda1  sata lvm2 a--    <1.82t    0 
  /dev/sdb1  sata lvm2 a--    <1.82t    0 
  /dev/sdc2  ssd  lvm2 a--  <446.58g    0 

[root@dell-r640-10 ~]# vgs
  VG   #PV #LV #SN Attr   VSize    VFree
  sata   2   1   0 wz--n-   <3.64t    0 
  ssd    1   2   0 wz--n- <446.58g    0 

[root@dell-r640-10 ~]# lvs
  LV   VG   Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vms  sata -wi-ao----   <3.64t                                                    
  root ssd  -wi-ao---- <442.58g                                                    
  swap ssd  -wi-ao----    4.00g                                        

2. Installed RHVH, selecting just the SSD. I had to reclaim space, but it installed fine.
3. Reinstalled RHVH; it crashed, and the PVs in VG sata were still visible in tty2 with 'pvs'.
4. Qin was also on the console and wiped $SATA1 and $SATA2 with dd (/dev/sda, /dev/sda1, /dev/sdb, /dev/sdb1).
5. Reinstalled RHVH; it crashed again, so it seems you need to wipe all 3 disks prior to reinstall.

Here is LVM output on a working RHVH host:

[root@dell-r640-04 ~]# pvs
  PV         VG   Fmt  Attr PSize    PFree  
  /dev/sdc2  rhvh lvm2 a--  <360.44g <71.27g

[root@dell-r640-04 ~]# vgs
  VG   #PV #LV #SN Attr   VSize    VFree  
  rhvh   1  11   0 wz--n- <360.44g <71.27g

[root@dell-r640-04 ~]# lvs
  LV                          VG   Attr       LSize    Pool   Origin                    Data%  Meta%  Move Log Cpy%Sync Convert
  home                        rhvh Vwi-aotz--    1.00g pool00                           4.79                                   
  pool00                      rhvh twi-aotz-- <284.00g                                  3.01   0.57                            
  rhvh-4.2.3.0-0.20180508.0   rhvh Vwi---tz-k  250.00g pool00 root                                                             
  rhvh-4.2.3.0-0.20180508.0+1 rhvh Vwi-aotz--  250.00g pool00 rhvh-4.2.3.0-0.20180508.0 2.32                                   
  root                        rhvh Vwi-a-tz--  250.00g pool00                           2.33                                   
  swap                        rhvh -wi-ao----    4.00g                                                                         
  tmp                         rhvh Vwi-aotz--    1.00g pool00                           4.91                                   
  var                         rhvh Vwi-aotz--   15.00g pool00                           3.71                                   
  var_crash                   rhvh Vwi-aotz--   10.00g pool00                           2.86                                   
  var_log                     rhvh Vwi-aotz--  <15.00g pool00                           2.59                                   
  var_log_audit               rhvh Vwi-aotz--    2.00g pool00                           4.83                           

I have also provided console connection details to Yuval.

Comment 16 Sandro Bonazzola 2019-03-19 09:09:21 UTC
Targeting 4.3.4 for investigation; this looks like an anaconda bug specific to this environment.

Comment 17 Qin Yuan 2019-03-21 08:19:13 UTC
I found some clues in the attached logs, and managed to reproduce the issue on my machine.

1. The clues in the logs:

1) In anaconda-tb-A5PGbc:

14:05:36,881 DEBUG blivet: doAutoPart: True
14:05:36,881 DEBUG blivet: autoPartType: 3
14:05:36,881 DEBUG blivet: clearPartType: None
14:05:36,881 DEBUG blivet: clearPartDisks: []
14:05:36,897 DEBUG blivet: storage.disks: [u'sda', u'sdb', u'sdc']
14:05:36,897 DEBUG blivet: storage.partitioned: [u'sda', u'sdb', u'sdc']
14:05:36,898 DEBUG blivet: boot disk: sda
14:05:36,903 DEBUG blivet: candidate disks: [DiskDevice instance (0x7f32a9fb2cd0) --
  name = sda  status = True  kids = 2 id = 39
  parents = []
  uuid = None  size = 447.07 GiB
  format = existing msdos disklabel
  major = 8  minor = 0  exists = True  protected = False
  sysfs path = /sys/devices/pci0000:3a/0000:3a:00.0/0000:3b:00.0/ata15/host16/target16:0:0/16:0:0:0/block/sda
  target size = 447.07 GiB  path = /dev/sda
  format args = []  originalFormat = disklabel  removable = False]
14:05:36,955 DEBUG blivet: created partition sda3 of 1024 and added it to /dev/sda
14:05:36,967 DEBUG blivet: created partition sda5 of 500 and added it to /dev/sda
14:05:37,035 INFO blivet: added partition sda4 (id 193) to device tree
14:05:37,040 INFO blivet: added lvmvg rhvh00 (id 196) to device tree
14:05:37,042 DEBUG blivet: rhvh00 size is 84.63 GiB
14:05:37,047 DEBUG blivet: vg rhvh00 has 84.63 GiB free
14:05:37,051 DEBUG blivet: Adding rhvh00-pool00/0 B to rhvh00
14:05:37,053 DEBUG blivet: Adding rhvh00-root/6144 MiB to rhvh00
14:05:37,058 DEBUG blivet: Adding rhvh00-root/6144 MiB to rhvh00-pool00
14:05:37,065 DEBUG blivet: Adding rhvh00-home/1024 MiB to rhvh00
14:05:37,070 DEBUG blivet: Adding rhvh00-home/1024 MiB to rhvh00-pool00
14:05:37,078 DEBUG blivet: Adding rhvh00-tmp/1024 MiB to rhvh00
14:05:37,082 DEBUG blivet: Adding rhvh00-tmp/1024 MiB to rhvh00-pool00
14:05:37,090 DEBUG blivet: Adding rhvh00-var/15 GiB to rhvh00
14:05:37,094 DEBUG blivet: Adding rhvh00-var/15 GiB to rhvh00-pool00
14:05:37,102 DEBUG blivet: Adding rhvh00-var_log/8192 MiB to rhvh00
14:05:37,106 DEBUG blivet: Adding rhvh00-var_log/8192 MiB to rhvh00-pool00
14:05:37,114 DEBUG blivet: Adding rhvh00-var_log_audit/2048 MiB to rhvh00
14:05:37,119 DEBUG blivet: Adding rhvh00-var_log_audit/2048 MiB to rhvh00-pool00
14:05:37,126 DEBUG blivet: rhvh00 size is 84.63 GiB
14:05:37,131 DEBUG blivet: vg rhvh00 has 67.7 GiB free
14:05:37,136 DEBUG blivet: Adding rhvh00-swap/4096 MiB to rhvh00
14:05:37,142 DEBUG blivet: rhvh size is 360.43 GiB
14:05:37,147 DEBUG blivet: vg rhvh has 70.44 GiB free
14:05:37,151 DEBUG blivet: vg rhvh: 75635884032 free ; lvs: ['pool00', 'home', 'root', 'rhvh-4.2.3.0-0.20180508.0', 'rhvh-4.2.3.0-0.20180508.0+1', 'var', 'var_crash', 'swap', 'tmp', 'var_log_audit', 'var_log']
14:05:37,152 DEBUG blivet: trying to set lv rhvh-pool00 size to 793.98 GiB
14:05:37,176 DEBUG blivet: failed to set size: 437.55 GiB short

Traceback (most recent call first):
  File "/usr/lib/python2.7/site-packages/blivet/devices/lvm.py", line 620, in _setSize
    raise ValueError("not enough free space in volume group")
  File "/usr/lib/python2.7/site-packages/blivet/partitioning.py", line 2308, in growLVM
    lv.size = lv.req_size
  File "/usr/lib/python2.7/site-packages/blivet/partitioning.py", line 449, in doAutoPartition
    growLVM(storage)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/kickstart.py", line 324, in execute
    doAutoPartition(storage, ksdata, min_luks_entropy=MIN_CREATE_ENTROPY)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/kickstart.py", line 2527, in doKickstartStorage
    ksdata.autopart.execute(storage, ksdata, instClass)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/ui/gui/spokes/storage.py", line 362, in _doExecute
    doKickstartStorage(self.storage, self.data, self.instclass)
  File "/usr/lib64/python2.7/threading.py", line 765, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 227, in run
    threading.Thread.run(self, *args, **kwargs)
ValueError: not enough free space in volume group

2) In lvm_vgs, lvm_pvs, lvm_lvs:
  VG   #PV #LV #SN Attr   VSize   VFree  
  rhvh   1  11   0 wz--n- 360.43g <71.27g

  PV         VG   Fmt  Attr PSize   PFree  
  /dev/sda2  rhvh lvm2 a--  360.43g <71.27g

  LV                          VG   Attr       LSize    Pool   Origin                    Data%  Meta%  Move Log Cpy%Sync Convert
  home                        rhvh Vwi-a-tz--    1.00g pool00                           4.79                                   
  pool00                      rhvh twi-aotz--  283.99g                                  2.98   2.13                            
  rhvh-4.2.3.0-0.20180508.0   rhvh Vwi---tz-k <250.00g pool00 root                                                             
  rhvh-4.2.3.0-0.20180508.0+1 rhvh Vwi-a-tz-- <250.00g pool00 rhvh-4.2.3.0-0.20180508.0 2.32                                   
  root                        rhvh Vwi-a-tz-- <250.00g pool00                           2.33                                   
  swap                        rhvh -wi-a-----    4.00g                                                                         
  tmp                         rhvh Vwi-a-tz--    1.00g pool00                           4.83                                   
  var                         rhvh Vwi-a-tz--   15.00g pool00                           3.85                                   
  var_crash                   rhvh Vwi-a-tz--   10.00g pool00                           2.86                                   
  var_log                     rhvh Vwi-a-tz--  <15.00g pool00                           2.51                                   
  var_log_audit               rhvh Vwi-a-tz--    2.00g pool00                           4.79                                   
  vms                         sata -wi-a-----   <3.64t                                         


In the ks.cfg of the RHVH iso, clearpart is not configured, so it defaults to none. The size of sda is 447.07 GiB, but /dev/sda2 only uses 360.43 GiB, leaving more than 80 GiB free. The free space on sda is big enough to let Anaconda create the required partitions on it and reach the growLVM step. If you look at the growLVM function in /usr/lib/python2.7/site-packages/blivet/partitioning.py, it iterates over all VGs to resize all pools, making sure each pool's base size is at least the sum of its LVs' sizes:
    
    for vg in storage.vgs:
        total_free = vg.freeSpace
        if total_free < 0:
            # by now we have allocated the PVs so if there isn't enough
            # space in the VG we have a real problem
            raise PartitioningError(_("not enough space for LVM requests"))
        elif not total_free:
            log.debug("vg %s has no free space", vg.name)
            continue

        log.debug("vg %s: %d free ; lvs: %s", vg.name, total_free,
                                              [l.lvname for l in vg.lvs])

        # don't include thin lvs in the vg's growth calculation
        fatlvs = [lv for lv in vg.lvs if lv not in vg.thinlvs]
        requests = []
        for lv in fatlvs:
            if lv in vg.thinpools:
                # make sure the pool's base size is at least the sum of its lvs'
                lv.req_size = max(lv.minSize, lv.req_size, lv.usedSpace)
                lv.size = lv.req_size

The _setSize method in /usr/lib/python2.7/site-packages/blivet/devices/lvm.py is:

    def _setSize(self, size):
        if not isinstance(size, Size):
            raise ValueError("new size must of type Size")

        size = self.vg.align(size)
        log.debug("trying to set lv %s size to %s", self.name, size)
        if size <= self.vg.freeSpace + self.vgSpaceUsed:
            self._size = size
            self.targetSize = size
        else:
            log.debug("failed to set size: %s short", size - (self.vg.freeSpace + self.vgSpaceUsed))
            raise ValueError("not enough free space in volume group")

In our case, there are two VGs: the newly created rhvh00 and the existing rhvh. For the rhvh VG, the pool is pool00, and the LVs on pool00 are ['home', 'root', 'rhvh-4.2.3.0-0.20180508.0', 'rhvh-4.2.3.0-0.20180508.0+1', 'var', 'var_crash', 'tmp', 'var_log_audit', 'var_log']. Blivet adds up the size of each LV in that list, which comes to roughly 1G+250G+250G+250G+15G+10G+1G+2G+15G=794G. That sum is much bigger than the disk's size, so _setSize raised the "not enough free space in volume group" error.
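
To make the arithmetic concrete, here is a minimal sketch of the comparison that growLVM/_setSize end up making for pool00. This is not blivet code; the LV sizes and free-space figures are taken from the lvs output and the anaconda log above, and the pool metadata overhead is only approximated:

    # Rough model of the failing check; sizes in GiB, as reported above.
    thin_lv_sizes = {
        'home': 1, 'root': 250,
        'rhvh-4.2.3.0-0.20180508.0': 250, 'rhvh-4.2.3.0-0.20180508.0+1': 250,
        'var': 15, 'var_crash': 10, 'tmp': 1,
        'var_log': 15, 'var_log_audit': 2,
    }
    requested = sum(thin_lv_sizes.values())  # ~794 GiB -> pool00's new req_size
    vg_free = 70.44                          # "vg rhvh has 70.44 GiB free"
    pool_space_used = 283.99 + 2.0           # pool00 data + metadata/overhead (approx.)
    if requested > vg_free + pool_space_used:
        short = requested - (vg_free + pool_space_used)
        # roughly matches "failed to set size: 437.55 GiB short" in the log
        print("not enough free space in volume group: %.2f GiB short" % short)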


2. Steps to reproduce:
According to the above analysis, the key point to reproduce this issue is to install RHVH while leaving a suitable amount of free space on the disk, then try to reinstall.

I reproduced the issue on a machine with a disk of 199GiB, the steps are:

1) Install rhvh-4.2-20190219.0, partition as following:
ignoredisk --only-use=/dev/disk/by-id/scsi-360a9800050334c33424b41762d726954
zerombr
clearpart --all
bootloader --location=mbr
reqpart --add-boot
part pv.01  --size=110000
volgroup rhvh pv.01 --reserved-percent=2
logvol swap --fstype=swap --name=swap --vgname=rhvh --recommended
logvol none --name=pool --vgname=rhvh --thinpool --size=90000 --grow
logvol / --fstype=ext4 --name=root --vgname=rhvh --thin --poolname=pool --size=6000 --grow
logvol /var --fstype=ext4 --name=var --vgname=rhvh --thin --poolname=pool --size=15360

After installation, the pv,vg,lvs are:
[anaconda root@vm-73-237 ~]# pvs
  PV                                              VG   Fmt  Attr PSize    PFree 
  /dev/mapper/360a9800050334c33424b41762d726954p2 rhvh lvm2 a--  <107.42g <1.20g
[anaconda root@vm-73-237 ~]# vgs
  VG   #PV #LV #SN Attr   VSize    VFree 
  rhvh   1  11   0 wz--n- <107.42g <1.20g
[anaconda root@vm-73-237 ~]# lvs
  LV                          VG   Attr       LSize  Pool Origin                    Data%  Meta%  Move Log Cpy%Sync Convert
  home                        rhvh Vwi---tz--  1.00g pool                                                                  
  pool                        rhvh twi---tz-- 89.42g                                                                       
  rhvh-4.2.8.3-0.20190219.0   rhvh Vwi---tz-k 74.42g pool root                                                             
  rhvh-4.2.8.3-0.20190219.0+1 rhvh Vwi---tz-- 74.42g pool rhvh-4.2.8.3-0.20190219.0                                        
  root                        rhvh Vwi---tz-- 74.42g pool                                                                  
  swap                        rhvh -wi------- 15.75g                                                                       
  tmp                         rhvh Vwi---tz--  1.00g pool                                                                  
  var                         rhvh Vwi---tz-- 15.00g pool                                                                  
  var_crash                   rhvh Vwi---tz-- 10.00g pool                                                                  
  var_log                     rhvh Vwi---tz--  8.00g pool                                                                  
  var_log_audit               rhvh Vwi---tz--  2.00g pool  

The PV is about 107GiB; about 90GiB of free space is left on the disk.

2) Reinstall rhvh using RHVH-4.2-20190219.0-RHVH-x86_64-dvd1.iso; the same error occurred:
16:55:57,654 DEBUG blivet: rhvh_vm-73-237 size is 89.57 GiB
16:55:57,664 DEBUG blivet: vg rhvh_vm-73-237 has 89.57 GiB free
16:55:57,674 DEBUG blivet: Adding rhvh_vm-73-237-pool00/0 B to rhvh_vm-73-237
16:55:57,870 DEBUG blivet: rhvh size is 107.42 GiB
16:55:57,881 DEBUG blivet: vg rhvh has 252 MiB free
16:55:57,891 DEBUG blivet: vg rhvh: 264241152 free ; lvs: ['pool', 'home', 'root', 'var', 'var_crash', 'swap', 'rhvh-4.2.8.3-0.20190219.0', 'rhvh-4.2.8.3-0.20190219.0+1', 'tmp', 'var_log_audit', 'var_log']
16:55:57,892 DEBUG blivet: trying to set lv rhvh-pool size to 260.27 GiB
16:55:57,943 DEBUG blivet: failed to set size: 168.6 GiB short

Comment 18 Qin Yuan 2019-03-22 09:17:38 UTC
Donald, how did you do the partitioning when installing RHVH? If automatic partitioning was selected, the PV was supposed to occupy almost the whole SSD disk; there shouldn't be more than 80GiB of free space left on the disk. The PVs of RHEL and RHVH on your system are:

[root@dell-r640-10 ~]# pvs
  PV         VG   Fmt  Attr PSize    PFree
  /dev/sda1  sata lvm2 a--    <1.82t    0 
  /dev/sdb1  sata lvm2 a--    <1.82t    0 
  /dev/sdc2  ssd  lvm2 a--  <446.58g    0 

[root@dell-r640-04 ~]# pvs
  PV         VG   Fmt  Attr PSize    PFree  
  /dev/sdc2  rhvh lvm2 a--  <360.44g <71.27g

As you can see, for RHEL, /dev/sdc2 is about 446.58g, which is close to the disk size of 447.0g, while for RHVH, /dev/sdc2 is only about 360.44g.

If you indeed selected automatic partitioning in the Anaconda GUI when installing RHVH, then it might be another strange problem...

Comment 19 Donald Berry 2019-03-23 00:17:36 UTC
I selected just the DELLBOSS 447 GB disk as the install disk (deselect the other two SATA disks).
I selected ‘I will configure partitioning’
I clicked ‘create automatically’ and reduced / from 325 GiB to 250 GiB; increased /var/log from 8192 MiB to 15 GiB

Comment 20 Qin Yuan 2019-03-25 08:26:42 UTC
(In reply to Donald Berry from comment #19)
> I selected just the DELLBOSS 447 GB disk as the install disk (deselect the
> other two SATA disks).
> I selected ‘I will configure partitioning’
> I clicked ‘create automatically’ and reduced / from 325 GiB to 250 GiB;
> increased /var/log from 8192 MiB to 15 GiB

If you only want to increase /var/log and have no special requirement for /, then one way to avoid the reinstallation failure is to clear the "Desired Capacity" of / and click "Update Settings" after increasing /var/log to 15GiB.

Comment 21 Sandro Bonazzola 2019-06-18 08:23:28 UTC
We need a bug on anaconda for the traceback in comment #17; can you please open it and make it block this bug?

Comment 22 Qin Yuan 2019-06-19 22:30:09 UTC
Tried to reproduce bug 1361788 with RHVH-4.3-20190512.3-RHVH-x86_64-dvd1.iso; installation failed with the same error as this bug:

Traceback (most recent call first):
  File "/usr/lib/python2.7/site-packages/blivet/devices/lvm.py", line 620, in _setSize
    raise ValueError("not enough free space in volume group")
  File "/usr/lib/python2.7/site-packages/blivet/partitioning.py", line 2308, in growLVM
    lv.size = lv.req_size
  File "/usr/lib/python2.7/site-packages/blivet/partitioning.py", line 449, in doAutoPartition
    growLVM(storage)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/kickstart.py", line 324, in execute
    doAutoPartition(storage, ksdata, min_luks_entropy=MIN_CREATE_ENTROPY)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/kickstart.py", line 2527, in doKickstartStorage
    ksdata.autopart.execute(storage, ksdata, instClass)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/ui/gui/spokes/storage.py", line 362, in _doExecute
    doKickstartStorage(self.storage, self.data, self.instclass)
  File "/usr/lib64/python2.7/threading.py", line 765, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 227, in run
    threading.Thread.run(self, *args, **kwargs)
ValueError: not enough free space in volume group

If I understand correctly, for these two bugs Anaconda should calculate the correct required size at the beginning, and tell the user to rearrange the storage if there is not enough space for the required size.

As bug 1361788 is already for Anaconda, do I still need to report a new one?

Comment 23 Sandro Bonazzola 2019-06-21 06:12:18 UTC
(In reply to Qin Yuan from comment #22)

> As bug 1361788 is already for Anaconda, do I still need to report a new one?

No need, thanks

Comment 24 Yuval Turgeman 2019-06-23 08:58:28 UTC
Removing autopart from the iso's kickstart will let the user reclaim space from an existing installation (which is what this bug is about).
Please note that if the disk is too small for the required installation, anaconda will still complain; for that we have bug 1361788.

Comment 25 Qin Yuan 2019-06-23 11:32:21 UTC
It's a good idea to remove the autopart cmd from the iso's kickstart; I don't think it's reasonable to set autopart without also setting clearpart. But I have one concern: the product doc declares that automatic partitioning is the recommended partitioning method for RHVH installation. If the autopart cmd is removed from the iso's kickstart, the default partitioning method will be "I will configure partitioning" when the user enters the installation destination page. Not sure if that's OK from the product's perspective?

Comment 26 Yuval Turgeman 2019-06-23 12:15:39 UTC
When selecting the storage spoke, the current layout of the disk is displayed, the default is thinp (from our installclass), and the user can select "click here to create automatically", which will be equivalent to autopart --thinp. The only difference is that the user can reclaim used space and anaconda won't fail.

Comment 27 Sandro Bonazzola 2019-06-25 08:12:56 UTC
(In reply to Yuval Turgeman from comment #26)
> When selecting the storage spoke, the current layout of the disk is
> displayed, the default is thinp (from our installclass), and the user can
> select "click here to create automatically", which will be equivalent to
> autopart --thinp. The only difference is that the user can reclaim used
> space and anaconda won't fail.

Discussed in today's meeting with Martin and QE; we would rather prefer to clear the partitions and run autopart directly.

Comment 28 Sandro Bonazzola 2019-07-03 08:00:17 UTC
In the current situation, if the disk is not clean you can't reclaim the space using the RHV-H iso, because with autopart enabled it fails and exits immediately.
By removing the autopart command, you will be able to reclaim space and still use autopart instead of manual partitioning, but it will require a few extra clicks.
The proposed patch does this. Martin, what do you think about it? The alternative is to re-target to 4.4 and wait for anaconda to fix this.

Comment 29 Martin Tessun 2019-07-03 16:15:20 UTC
(In reply to Sandro Bonazzola from comment #28)
> In the current situation, if the disk is not clean you can't reclaim the space
> using the RHV-H iso, because with autopart enabled it fails and exits immediately.
> By removing the autopart command, you will be able to reclaim space and still
> use autopart instead of manual partitioning, but it will require a few extra
> clicks.
> The proposed patch does this. Martin, what do you think about it? The
> alternative is to re-target to 4.4 and wait for anaconda to fix this.

Sounds reasonable to me. But we should clearly document this.

Comment 30 Yuval Turgeman 2019-07-04 12:31:50 UTC
Moving back to POST based on comment 29

Comment 32 Qin Yuan 2019-07-14 03:16:56 UTC
Versions:
RHVH-4.3-20190711.1-RHVH-x86_64-dvd1.iso

Steps:
1. Install RHVH on a 199GiB disk:
   1) select "click here to create them automatically" on manual partitioning page, 
   2) reduce root from 115GiB to 65GiB to leave about 60GiB of free space
   3) continue to finish installation
2. Reinstall RHVH

Results:
1. When reinstalling RHVH, there is no "not enough free space in volume group" error before entering the installation destination page.
2. Existing partitions could be deleted and new partitions created on the manual partitioning page.
3. Reinstall succeeded.

There is no longer an exception preventing the user from creating partitions manually when there is not enough available space, so the bug is fixed; moving to VERIFIED.

Comment 35 errata-xmlrpc 2019-08-12 11:54:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:2437

