Description of problem:
When I try to install rawhide over a previous rawhide install I get an error dialog from the autopartitioning screen.

How reproducible: every time

Steps to Reproduce:
1. Start installing rawhide.
2. Get to the autopartitioning screen.
3. Press Next.

Actual results:
Dialog pops up with

    Error Partitioning
    ------------------
    Could not allocate requested partitions:

    Unsatisfied partition request
    New Part Request -- mountpoint: None  uniqueID: 6
      type: physical volume (LVM)  format: 1  badblocks: None
      device: None  drive: ['hda']  primary: None
      size: 0  grow: 1  maxsize: None
      start: None  end: None  migrate: None  origfstype: None

    _OK

A second attempt at auto-partitioning gives a python backtrace.

Expected results:
No errors.

Additional info:
Partitioning manually with diskdruid works fine. This started happening some weeks ago, I think.
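For illustration only, here is a toy sketch (not anaconda's actual autopart code, and the helper name is made up) of why a request with size 0 and grow 1 can still be "unsatisfied": the allocator must first place every request's minimum footprint on the allowed drive, and only then hand leftover space to growable requests. If even the minimum footprint cannot be placed, the whole allocation fails with an error like the one in the dialog above.

```python
# Hypothetical, heavily simplified allocator; NOT anaconda's real logic.
def allocate(requests, free_mb):
    """Place each request's minimum size, then grow the growable ones."""
    placed = []
    remaining = free_mb
    for req in requests:
        need = max(req["size"], 1)  # even a grow request needs a toehold
        if need > remaining:
            # This is the "Unsatisfied partition request" failure mode.
            raise RuntimeError("Unsatisfied partition request: %r" % req)
        placed.append(dict(req, size=need))
        remaining -= need
    # Distribute leftover space evenly among growable requests.
    growable = [p for p in placed if p["grow"]]
    if growable and remaining:
        share = remaining // len(growable)
        for p in growable:
            p["size"] += share
    return placed
```

Under this model a stale LVM physical-volume request left over from the previous install can fail before any growing happens, which matches the dialog firing immediately on Next.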
What is your partitioning setup originally? What options are you selecting as far as which drives, etc? Could you attach the traceback you get?
The original partitioning is just a default autopartition from a previous rawhide install, or the defaults from diskdruid - so nothing special. I'll try to retrieve the backtrace when I have time.
I'm pretty sure that this is fixed in CVS as of now (dupe of bug 131325)
Confirmed fixed in rawhide-20040928. Thanks.
Bug #131325 is restricted. I'm seeing a similar problem in FC3 and CentOS 4.1 (i.e. RHEL4 U1) but not in FC4. Any clues as to what the fix is? Thanks!
anaconda maintains a pretty active ChangeLog you can use:

    2004-09-27  ...
            * autopart.py (doAutoPartition): Fix LVM traceback if going
            back from not enough space (#131325)

cvs is your friend - in this case the change seems to be in one file, and cvs log suggests the following:

    cvs diff -r 1.147 -r 1.148 autopart.py

It'd be helpful to know the steps you are taking and the setup of the machines, in case it is not the same issue. The reproducer for bug #131325, which should be in RHEL4, was:

1. Autopartition
2. Remove none (use free space)
3. Failure message
4. Change to remove all partitions
5. Get traceback
Err, yes, that's nice about the changelog, but as mentioned, the bug referred to there, #131325, is still restricted just like it was an hour ago. The problem happens on a bunch of new IBM 345 rackmount servers, with RAID'd disks (LSI controller). Autopartitioning fails immediately, with no need to choose remove none and then go back, so it may be a different issue. It also happens when using a kickstart script with --clearpart all.
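For reference, a minimal kickstart partitioning section of the kind described above (reconstructed as an assumption, since the actual script isn't attached; note the standard directive spelling is `clearpart --all` rather than `--clearpart all`):

```
# Assumed minimal kickstart fragment, not the reporter's actual file
clearpart --all --initlabel
autopart
```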
True, but with the reproducer and the diff you should be able to see what's going on :) Please file a new bug for your issue.
That change (from cvs diff -r 1.147 -r 1.148 autopart.py) is already in the RHEL4u1 anaconda version, so that's not it. Clearly it's *something* fixed since then and before FC4, though, and there have been quite a few changes to that file in between. I'll file a new bug shortly.
New bug is #164633, by the way. And for what it's worth, the "cvs diff -r 1.147 -r 1.148 autopart.py" change is already in centos4/rhel4u1, so that ain't it.