This may be an anaconda problem, but I will start with preupgrade. It is probably both, and may be two bugs; please feel free to let me know if I should separate this report. For the record, the anaconda version is 11.5.0.44. When I attempt a preupgrade on one server I get an error: "Error Downloading Kickstart file". Here is the volume structure:

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md0             147786584   7175316 133104100   6% /
/dev/sda2              6048352    143236   5597876   3% /temp
tmpfs                   484312         0    484312   0% /dev/shm

I will also include the grub.conf as an attachment. It then says I can cancel to run interactively. However, the interactive version insists on installing a new copy of Linux; it can't seem to find my existing copy. As I said, this may be a separate bug, and if so let me know and I will file a separate bug report.
Created attachment 340061 [details] the grub.conf which is supposed to start the upgrade.
This is still failing, though with a slightly different presentation that does appear to provide additional information. It now comes to a point where it asks you to choose an install drive, and only offers the actual physical drives in the system. Last time it had the RAID volume as an option; this time the RAID volume doesn't exist. Apparently this new version of the OS is dropping support for upgrading software RAID volumes. Please advise. My RAID volume is basically a RAID 1 of sda and sdb.
There are known bugs in anaconda's handling of mdraid devices. Nobody has told me about any plans to remove support for it. Today's rawhide has some mdraid fixes in anaconda (11.5.0.47) which may help here. I'll attempt to set up a reproducer as soon as I can.
Well, it is good to hear this. I use md devices quite extensively and would be glad to test things. I would also be glad to try to get you better diagnostics on this and other issues; I actually have a number of bugs open on software RAID, and anaconda is not the only source of issues here :-( I would love to try 11.5.0.47; I have been asked to do this on a couple of other bugs. The problem is that I am not sure exactly how. As far as I can tell I should use an updates.img file, which I am very conversant with, but I am not sure where I would find the right file. I spent 4 hours last night trying to find it without success. If you can point me in the right direction I would be happy to put some time into helping with this.
I find it interesting that preupgrade is still using 11.5.0.38. We need to find a way to test the current versions.
Also, I am wondering: is the problem that the kickstart parameter is right and anaconda just can't process it, or is the parameter itself wrong? The parameter I end up with for the drive seems to be an LVM volume name, but I am not using LVM. Should I change it to /dev/md0 or something?
If your system is booting 11.5.0.38, you either have a wildly out-of-date mirror or you just need to re-run preupgrade to fetch newer install images. anaconda's first stage can't seem to mount RAID devices at the moment - or any hard drive devices, actually. I'm investigating that. In the meantime, if you have a wired network connection you could easily complete the upgrade by removing ks=XXX and replacing stage2 with an http:// URL for the install.img.
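For anyone else hitting this, the edit described above would look roughly like the following in the preupgrade-generated grub.conf entry. This is a sketch, not my actual entry: the kernel arguments, UUID, and mirror URL are placeholders you would substitute from your own grub.conf and a real Fedora mirror.

```
# Hypothetical preupgrade boot entry, before the edit:
title Upgrade to Fedora 11
        kernel /upgrade/vmlinuz preupgrade repo=hd::/var/cache/yum/preupgrade ks=hd:UUID=xxxx:/upgrade/ks.cfg stage2=hd:UUID=xxxx:/upgrade/install.img
        initrd /upgrade/initrd.img

# After the edit: drop ks=..., point stage2 at install.img over the network:
title Upgrade to Fedora 11
        kernel /upgrade/vmlinuz preupgrade repo=hd::/var/cache/yum/preupgrade stage2=http://mirror.example.com/fedora/releases/11/Fedora/i386/os/images/install.img
        initrd /upgrade/initrd.img
```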
OK, this problem still exists as of 11-Preview. B. Using the comment 7 method does not entirely fix the problem. Well, it does let me do an upgrade, but when I am done I am left with an unbootable machine. I had to go into rescue mode and do a grub-install to get a bootable machine. I am wondering if this (B) should be a separate bug report, but I figured I would report it here first. If you ask me to, or if I have not heard anything about this by Monday night, I will file a separate bug report on this probably separate problem.
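For the record, the rescue-mode repair was along these lines. The device names match my RAID 1 of sda and sdb; anyone else should substitute their own disks.

```shell
# Boot the install media in rescue mode and let it mount the
# installed system under /mnt/sysimage, then work inside it:
chroot /mnt/sysimage

# Reinstall GRUB to the MBR of both RAID1 members, so the
# machine can boot from either disk:
grub-install /dev/sda
grub-install /dev/sdb
```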
Definitely a different problem. If you got nothing but the word 'GRUB' onscreen when the system tried to reboot, that's some manifestation of bug 450143. Otherwise, please file a new bug against anaconda, and be sure to include a detailed description of what the system did when it tried (and failed) to boot, what you did to fix it, and a copy of your grub.conf. As for the original problem here - preupgrade 1.1.x has dropped support for using /boot on RAID, so this problem should not occur again.
Interesting. Has support for RAID been totally dropped, or just for /boot? Where do we find a list of the configurations and options that have had support dropped? Is there any information on workarounds for these problems? Also, has support for /boot on RAID been dropped only for preupgrade, or also for a CD-ROM based upgrade? How about a message telling us that preupgrade will not work, instead of saying that it worked and then leaving a messed-up machine? That would seem to be the incorrect way to drop support for a supported configuration.
(In reply to comment #10)
> Interesting. Has support for RAID been totally dropped, or just for /boot?

Just for /boot. We need a bare partition to store the installer runtime (install.img, aka "stage2") and the kickstart file. Preupgrade *used* to allow the kickstart to be on a RAID1 partition - this works because you can mount just one member of the RAID set, copy the kickstart file to /tmp, and then unmount the device and reassemble the RAID device normally for the upgrade. It *doesn't* work for stage2, because we need to keep the device mounted to keep the stage2 image mounted. So you can't reassemble the RAID set, and you can't perform the upgrade without breaking the RAID.

> Where do we find a list of the configurations and options that have had
> support dropped? Is there any information on workarounds for these problems?

The workaround is right there in the error dialog - use a wired network connection and fetch stage2.img over the internet at boot time.

> Also, has support for /boot on RAID been dropped only for preupgrade, or
> also for a CD-ROM based upgrade?

A CD-ROM based upgrade will work fine, because stage2 is on the CD.

> How about a message telling us that preupgrade will not work, instead of
> saying that it worked and then leaving a messed-up machine?

That's what's been done in preupgrade-1.1.0pre3. Sorry you hit the problem before the code changed to disallow this setup.
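The single-member trick described above can be sketched as follows. This assumes a RAID1 /boot whose members carry the old v0.90 md superblock at the *end* of the partition (which is what makes mounting a bare member possible); the member names and kickstart path are illustrative, not what preupgrade literally runs.

```shell
# Mount one RAID1 member directly, read-only - safe with v0.90
# metadata because the md superblock lives at the end of the partition:
mount -o ro /dev/sda1 /mnt/member

# Grab the kickstart file, then release the member:
cp /mnt/member/upgrade/ks.cfg /tmp/ks.cfg
umount /mnt/member

# The array can now be reassembled normally for the upgrade:
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
```

This is exactly why the same approach cannot work for stage2: the filesystem would have to stay mounted for the whole install, so the array could never be reassembled.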