Description of problem:
Anaconda doesn't allow reclaiming disk space after marking all existing partitions/disks for deletion in the "Reclaim disk space" dialog. The "Reclaim space" button stays inactive. See attached screenshot. This happens both when using the "Delete all" button and when deleting all partitions/disks one by one.

Version-Release number of selected component (if applicable):
RHEL-8.3.0-20200701.2
anaconda-33.16.3.10-1.el8
python3-blivet-3.2.2-3.el8

How reproducible:
Always? 2 out of 2 attempts

Steps to Reproduce:
1. Start a VNC installation on a system where RHEL-7 was pre-installed. The default partitioning layout was created in RHEL-7.
2. Go to the Installation Destination spoke - the disk is already selected, so tick the option to reclaim disk space.
3. In the "Reclaim disk space" dialog window, click the "Delete all" button.
4. Try to click the "Reclaim space" button.

Actual results:
The "Reclaim space" button is inactive.

Expected results:
It's possible to click the "Reclaim space" button and reclaim disk space.

Additional info:
So far only reproduced on s390x when installing in LPAR mode. Reproduced with one DASD disk (more DASD disks have not been tested).
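Since step 1 depends on the pre-existing RHEL-7 layout, it may help to record exactly what that layout looks like on the affected DASD before starting the installer. Here is a minimal diagnostic sketch assuming python3-blivet's public API (the Blivet()/reset()/disks/children attribute names are my recollection of the public interface, not taken from anaconda's code); run as root on the affected system:

    # Diagnostic sketch, assuming blivet's public API; dumps the on-disk
    # layout so a reproducing layout can be compared with a working one.
    import blivet

    storage = blivet.Blivet()
    storage.reset()  # scan the system and populate the device tree

    for disk in storage.disks:
        print(disk.name, disk.size)
        for child in disk.children:  # partitions / PVs on this disk
            print("  ", child.name, child.size, child.format.type)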
Created attachment 1700470 [details] lvm.log
Created attachment 1700474 [details] screenshot
The same result occurs when using the same disk in a z/VM guest, so this problem is not LPAR-specific. I'm not able to reproduce it with RHEL-8.2 GA. QE note: this was reproduced on RTT LPAR/z/VM guests with DASD 4009
More notes:
1) After re-installing the disk with RHEL-8.2 GA (default partitioning), a subsequent installation of RHEL-8.3 successfully reclaimed disk space.
2) After re-installing another z/VM guest with RHEL-7.8 (default partitioning), also using one DASD disk but with a different size, the problem was also reproducible there (RTT z/VM guest a7, DASD: 3027).
The workaround is to go to the Custom partitioning screen, remove the existing mount points, and let anaconda create the partitions.
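For unattended installs hitting the same condition, a kickstart equivalent of that workaround should sidestep the reclaim dialog entirely. A minimal sketch using pykickstart to validate such a snippet (the clearpart --all plus autopart combination is my assumed equivalent of the manual workaround, not something verified against this specific bug):

    from pykickstart.parser import KickstartParser
    from pykickstart.version import makeVersion

    # Build a handler for the current kickstart syntax and parse a
    # snippet that wipes the existing layout and auto-partitions.
    handler = makeVersion()
    parser = KickstartParser(handler)
    parser.readKickstartFromString(
        "clearpart --all --initlabel\n"
        "autopart --type=lvm\n"
    )
    print(handler.clearpart)  # echoes the parsed clearpart command
    print(handler.autopart)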
I was just about to file a similar (or actually the same) bug and found this one in a preliminary search. I ran into the same problem with anabot being unable to reclaim disk space and finish the installation, and I wasn't able to complete the task manually either. On top of that, it also looked like I was trapped in the disk partitioning spoke; the only way to exit to the main hub was to deselect the disk and click Done (I'm mentioning this for the sake of completeness, as I'm not sure at all whether this is correct behaviour).

I think this only happens when you first select a package set that requires more space than can be made available even after removing all of the existing partitions (in my case the selected space to reclaim was 8 GiB, whereas the selected package set required 8.06 GiB). After having a look at the screenshot from Honza, it looks like the same happened in his case. This assumption also proved correct when I switched to a package set with a smaller footprint - in that case I was able to reclaim the disk space as expected.

I'd say the logical outcome of the aforementioned situation would be to just let the user reclaim the space and, after exiting the partitioning spoke, display a warning at the bottom of the screen telling the user that more space is needed for the selected package set.
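To illustrate the suspected condition, here is a hypothetical sketch of the sensitivity check (my reconstruction, not Anaconda's actual code): if the button is only enabled when the reclaimable space covers the full required size, an 8 GiB vs. 8.06 GiB gap silently disables it.

    GIB = 1024 ** 3

    def reclaim_button_sensitive(reclaimable_bytes, required_bytes):
        # Hypothetical check: enable "Reclaim space" only when everything
        # selected for deletion covers what the installation needs.
        return reclaimable_bytes >= required_bytes

    # The case from this comment: 8 GiB reclaimable vs. 8.06 GiB required.
    print(reclaim_button_sensitive(8 * GIB, int(8.06 * GIB)))  # False -> button inactive
    print(reclaim_button_sensitive(9 * GIB, int(8.06 * GIB)))  # True  -> button active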
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.
I'm re-opening the bug. BTW, the same problem exists on RHEL-9 as well.
I'm reopening the bug as it's still present (RHEL-8.8) and I've been hitting it every now and then during Anaconda testing.
Upstream PR: https://github.com/rhinstaller/anaconda/pull/4546
Cloned as a RHEL-9 bug 2187371.
PR: https://github.com/rhinstaller/anaconda/pull/4826
The bug fix passed pre-verification; setting Verified: Tested.
Checked that anaconda-33.16.9.3-1.el8 is in nightly compose RHEL-8.9.0-20230716.29.
There's no documentation needed for this bug - not checking.
Necessary tests were successful, as stated in Comment 26.
Moving to VERIFIED.