Red Hat Bugzilla – Bug 498408
findExistingRootDevices teardown(recursive=True) fails with lvm over mdraid
Last modified: 2011-02-24 09:40:06 EST
Created attachment 341906 [details]
Description of problem:
Anaconda crashed at the end of partitioning. After a reboot, the installer crashes even before it gets to the partitioning UI.
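For context, the teardown(recursive=True) in the title refers to anaconda deactivating an existing device stack before repartitioning; with lvm over mdraid the stack has to come down strictly top-to-bottom. A hedged shell sketch of the equivalent manual teardown (the VG name, device names, and mount points are assumptions, not taken from the installer's code):

  # tear the stack down from the top: filesystems, then LVM, then md
  umount /mnt/sysimage/var/www 2>/dev/null
  umount /mnt/sysimage/boot 2>/dev/null
  umount /mnt/sysimage 2>/dev/null
  vgchange -an sysvg      # deactivate every LV in the volume group
  mdadm --stop /dev/md1   # the array under the LVM PV can stop only now
  mdadm --stop /dev/md0   # the /boot array has nothing stacked on it

If the LVM layer is not deactivated first, mdadm --stop fails with "device busy", which is the kind of ordering a recursive teardown has to get right.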
Steps to Reproduce:
1. Partition as follows:
- /boot as ext3 on an md RAID using partitions on sda and sdb
- swap using partitions on sda and sdb
- an LVM volume group on an md RAID using partitions on sda and sdb
- / LV as ext4 on the LVM volume group
- /var/www LV as ext4 on the LVM volume group
2. Anaconda crashes complaining about /var/www (something about nested volumes; sorry, I didn't make a note)
3. Reboot - now it fails (see attachment) before the partitioning UI

Expected results:
To install and use Fedora :-)
*** Bug 498479 has been marked as a duplicate of this bug. ***
This is probably the same bug as 495078. However, there are a few differences, and since the traceback created a new bug report I thought that I would leave it as its own report.
It is a different machine, but the same general configuration. This is with an fc11 beta install and an fc11 preview as the disk being used to run the new install. That is, I had tried to install with fc11 beta and failed; I will be trying to go back to an fc9 disk and erasing everything to try a new install.
The configuration I am trying to get to is boot and swap on disk sda, then a software RAID partition on sda and sdb with a RAID 1 volume as root. The beta software says it loaded that way. I can now do a rescue boot with the preview. The beta will not even do a rescue boot. When I do a rescue boot, the configuration I find has no relationship to what I entered.
This bug appears to have been reported against 'rawhide' during the Fedora 11 development cycle.
Changing version to '11'.
More information and reason for this action is here:
I got the same issue with this kickstart:
clearpart --all --initlabel --drives=sda,sdb
part swap --size 512 --ondisk sda --grow --maxsize 1024
part swap --size 512 --ondisk sdb --grow --maxsize 1024
part raid.01 --size 100 --ondisk sda --asprimary
part raid.02 --size 100 --ondisk sdb --asprimary
raid /boot --level 1 --device md0 raid.01 raid.02 --fstype ext3
part raid.11 --size 1 --grow --ondisk sda
part raid.12 --size 1 --grow --ondisk sdb
raid pv.1 --level=1 --device md1 raid.11 raid.12
volgroup sysvg pv.1 --pesize=32768
logvol / --vgname=sysvg --size=10240 --name=lvrootfs --grow --maxsize=40960 --fstype=ext4
logvol /var/log --vgname=sysvg --size=4096 --name=lvlog --fstype=ext4
Anaconda complains about the free space on the VG (the last message is: DEBUG vg sysvg has 0 MB free). It looks like it's not possible to create VGs on top of MD devices via kickstart; however, it is possible to do it using the interactive installation. No troubles arise when VGs are created on plain disk partitions.
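If anyone wants to check what state the installer left things in when it reports 0 MB free, a minimal sketch of inspecting the stack from the installer's shell (tty2) or rescue mode; the device names are assumptions matching the kickstart above:

  # confirm the md arrays were assembled as the kickstart requested
  cat /proc/mdstat

  # check whether md1 was actually initialized as an LVM physical volume
  pvdisplay /dev/md1

  # "Free PE" of 0 here matches the "vg sysvg has 0 MB free" log message
  vgdisplay sysvg
  pvs -o pv_name,vg_name,pv_size,pv_free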
OK, I got this error with preupgrade and was able to file a bug report through the automated system. That bug was then marked as a duplicate of this one.
I decided to try the latest production fc11 disk and do an upgrade. I get more or less the same thing. The error status on all of the ctrl-alt screens is the same, BUT:
Now when I try to file a report it will not let me. As soon as I click on filing a report (save), it puts up the data entry screen for your bugzilla login information, BUT THEN it pops up a screen that says "finding storage devices". I can still switch to the other screens, but the mouse cursor is the dreaded watch. The watch will move, but you can't do anything.
Is there anything we can do to help get this bug fixed more quickly?
This bug seems to affect all machines which are using /dev/md0 as a RAID 1 device. One would think this would be a high priority, not a low one.
This time, apparently, preupgrade is letting me upgrade. The CD-ROMs still will not.
Does this still happen with current anaconda (12-18)? Please test with rawhide.
Everything right now at all eight offices has been upgraded, or the data stored off and then the machine reformatted and loaded fresh. Even that doesn't work very well. I have several active bug reports on that, where RAID 1 and RAID 5 don't seem to work right and I am left with an unbootable system. I have gotten so I can then manually make it bootable, though.
I will be traveling for a week, and will try to build up a test system and check it next week.
Are you aiming at the upgrade problem, the fresh install problem, or the error message and error handling problem having gone away?
Created attachment 365641 [details]
F12Beta x86_64 anaconda traceback
Getting this error trying to install f12b-x86_64.
Smolt profile: http://www.smolts.org/client/show/pub_aa591de8-132e-4232-8f45-42556d142513
I have to second that this is still present in Fedora 12. I have gotten so good at getting around it, since it happens so often, that I kind of forgot about this bug report. In the end this bug was dealt with by doing an install, which left/leaves you with a completely unbootable system about 50% of the time. It hangs right at the start of the boot process. All you have to do then is rescue boot and reinstall the grub system, and you are off and going.
This is doable, but really annoying, and probably not in the skill set of some people using Fedora.
P.S. I see the new title says "fails with lvm over mdraid". If it helps, the machines I am running this on that are giving me problems are in fact all using mdraid. I experience this with both RAID 1 and RAID 5, but I am not using lvm.
For me it seems to relate to systems that have undergone multiple upgrades through several consecutive versions of Fedora.
The failure in comment 10 is at least partly related to logical volumes with '-' in their names (bug 527302) which was fixed in anaconda-12.43-1 and is included in F12.
Ray, if you'd like to attach an F12 traceback I can try to track down what's happening for you.
I am not sure exactly what it is that you want?
Right now I am not actually getting any crashes or dumps during the installation or in this case upgrades. At this point it indeed may be a different bug. It certainly presents differently.
What specifically is happening: I am trying to upgrade, or in a couple of cases install, Fedora with an mdraid configuration, usually with both an mdraid md0 for /boot and an md1 for /, both either RAID 1 or RAID 5. This is also on machines for which I reported this bug earlier. On the systems where I do an install instead of an upgrade, it is still on a system with an existing installation, but instead of the upgrade I select a complete scratch install.
Everything says that it has completed successfully, and I get to the screen where you click a button to reboot after (if applicable) you have removed the CD-ROM. Everything looks perfect and the BIOS screen comes up. Then, at the point where you would normally see the line saying it is about to boot a specific kernel if you don't do anything, and where you could press a key to edit the boot parameters, you instead get a black screen with the cursor blinking in the top left corner.
At this point, if you rescue boot, it says everything is cool. If you then reinstall the grub loader using grub-install, you are up and going. Apparently the install system is not actually installing, or not properly installing, the boot loader. Interestingly enough, this is the case even if on the boot loader page you tell it to do nothing and change nothing.
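For reference, a minimal sketch of the rescue-boot grub reinstall described above; the device names are assumptions, so substitute the disk your BIOS actually boots from:

  # boot the install media with "linux rescue" and let it mount the
  # installed system, then switch into it
  chroot /mnt/sysimage

  # reinstall the GRUB (legacy) boot loader to the MBR of the first disk
  grub-install /dev/sda

  # with /boot on RAID 1, installing to the mirror as well means the box
  # can still boot if the first disk dies (assumption: sdb is the mirror)
  grub-install /dev/sdb

  exit    # leave the chroot, then reboot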
(In reply to comment #14)
> I am not sure exactly what it is that you want?
> Right now I am not actually getting any crashes or dumps during the
> installation or in this case upgrades. At this point it indeed may be a
> different bug. It certainly presents differently.
If it's a different bug (it seems to be) it warrants a different report.
In any report, we'll want a description of the procedure to reproduce the bug, along with anaconda.log, anaconda.syslog, storage.log, program.log, install.log, and sometimes yum.log and/or anaconda.xlog from the installation. You could get them in rescue mode from the following locations:
/mnt/sysimage/root/install.log (or upgrade.log if more appropriate)
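A hedged sketch of copying those logs off the machine (the USB device name is an assumption):

  # during a crashed install, switch to the shell on tty2; the installer's
  # own logs live in /tmp of the installer environment
  mkdir -p /mnt/usbkey
  mount /dev/sdc1 /mnt/usbkey    # assumption: sdc1 is a USB stick
  cp /tmp/anaconda.log /tmp/storage.log /tmp/program.log /mnt/usbkey/

  # from rescue mode instead, grab the log saved on the installed system
  cp /mnt/sysimage/root/install.log /mnt/usbkey/
  umount /mnt/usbkey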
The information you just gave in your last comment, in addition to the logs, would be a very good start.
Thanks for your patience.
(In reply to comment #15)
Interesting. Bugzilla just emailed me that you had posted this.
As requested in 14, I have now opened a new bug, 556539.
Ok, so does that mean we are in agreement that this bug can be closed?
This bug appears to have been reported against 'rawhide' during the Fedora 13 development cycle.
Changing version to '13'.
More information and reason for this action is here:
Feel free to reopen this bug if you are experiencing this issue with Fedora 14.