Red Hat Bugzilla – Bug 982934
anaconda screws preexisting raid configuration and fails to install the OS
Last modified: 2014-08-06 14:18:25 EDT
Description of problem:
The Anaconda install scripts generate an incorrect kickstart configuration even though a correct one is explicitly provided in the supplied ks.cfg
(automatic installation mode).
Version-Release number of selected component (if applicable):
Fedora 19, since the release announcement.
I have spent more than a week fighting this.
Steps to Reproduce:
1. Get the Fedora 19 x86_64 DVD.
2. Manually create a set of RAID arrays (these already exist from previous installations).
3. Manually run mkfs (ext2, ext4) on those RAID arrays
(to get filesystems with the required parameters and options).
4. Create ks.cfg by hand with raid options like these:
raid /boot --noformat --device=md2
raid / --noformat --device=md3
raid /home --noformat --device=md5
and all the other necessary stuff for the kickstart configuration.
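For context, the relevant storage section of such a ks.cfg looks roughly like this (the raid lines are verbatim from above; the clearpart line is an illustrative assumption about the rest of the file):

```shell
# Reuse the preexisting arrays without reformatting them.
# The raid lines are taken verbatim from this report;
# clearpart --none is an assumed companion directive.
clearpart --none
raid /boot --noformat --device=md2
raid /     --noformat --device=md3
raid /home --noformat --device=md5
```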
5. Start the installation with the ks= kernel command line
parameter pointing at that file.
6. While waiting for the slow installation, watch a bunch of
systemd-udevd processes (apparently one per preexisting RAID array)
eating all the CPU time.
7. Watch the md devices get assembled and stopped several times (for no obvious reason).
8. After a lot of time has been wasted, look at /tmp/anaconda-tb-*
and find lines like:
KickstartValueError: The following problem occurred on line 22 of the kickstart file:
No preexisting RAID device with the name "3" was found.
together with the mangled raid kickstart options (a few lines below in the same file):
raid /boot --device=2 --noformat --useexisting
raid / --device=3 --noformat --useexisting
raid /home --device=5 --noformat --useexisting
The installation fails with this bogus error.
Several questions:
- Why does anaconda add `--useexisting' when it was not in the supplied ks.cfg?
- Why does it rewrite the RAID device names (md2, md3, md5 become 2, 3, 5)?
- What are all those systemd-udevd processes doing with that much CPU time?
- And why does the installer also remove the `swap' keyword from the
`part swap' options?
Anaconda should simply do what the kickstart file tells it to.
I have already tried to work around this using:
1. Preinstallation script (%pre ... %end) to:
--- Start all preexisting raid arrays.
--- Create symbolic links from /dev/md/ to /dev/md[1-9]* like
2 -> ../md2
3 -> ../md3
5 -> ../md5
--- Generate /etc/mdadm.conf describing all the preexisting RAID arrays;
I have tried two variants of mdadm.conf: one using `devices=' and one using `UUID='.
2. Using a shell on the second virtual console (tty2) to manually verify
the active RAID configuration and the correctness of /etc/mdadm.conf.
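The %pre workaround above can be sketched as a kickstart fragment (a hedged sketch, not the exact script used; the array numbers 2, 3, 5 come from this report, and the choice of the UUID-based mdadm.conf variant is one of the two variants tried):

```shell
%pre
# Assemble all preexisting RAID arrays from their superblocks.
mdadm --assemble --scan

# Create the bare-digit names under /dev/md/ that anaconda's error
# message suggests it is looking for (2 -> ../md2, etc.).
mkdir -p /dev/md
for n in 2 3 5; do
    ln -sf ../md$n /dev/md/$n
done

# Describe the preexisting arrays in /etc/mdadm.conf.
# This is the UUID-based variant; a devices= variant was also tried.
mdadm --detail --scan > /etc/mdadm.conf
%end
```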
Samantha N. Bueno 2013-07-10 09:58:51 EDT:
> NEW → CLOSED → WONTFIX → NOTABUG
Really? I strongly disagree with this resolution.
Anaconda rewrites explicit kickstart directives instead of honoring them;
that is a real bug, not NOTABUG.