Bug 982934 - anaconda screws preexisting raid configuration and fails to install the OS
Status: CLOSED CANTFIX
Product: Fedora
Classification: Fedora
Component: anaconda
Version: 19
Hardware: x86_64 Linux
Priority: unspecified
Severity: high
Assigned To: Anaconda Maintenance Team
QA Contact: Fedora Extras Quality Assurance
Depends On:
Blocks:
Reported: 2013-07-10 03:35 EDT by QingLong
Modified: 2014-08-06 14:18 EDT (History)
8 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-07-10 09:58:51 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description QingLong 2013-07-10 03:35:12 EDT
Description of problem:
Anaconda's install scripts rewrite the kickstart configuration incorrectly,
even though a correct one is explicitly provided via the supplied ks.cfg
(automatic installation mode).

Version-Release number of selected component (if applicable):
19.30.13-1

How reproducible:
100%. Reproducible since the Fedora 19 release announcement; I have been
fighting this for more than a week.

Steps to Reproduce:
1. Get the Fedora 19 x86_64 DVD.
2. Manually create a set of raid arrays (they persist from previous
   installations).
3. Manually run mkfs (ext2, ext4) on those raid arrays
   (to get filesystems with the required parameters and options).
4. Create ks.cfg by hand with raid options like these:
      raid /boot --noformat --device=md2
      raid /     --noformat --device=md3
      raid /home --noformat --device=md5
   and all the other necessary stuff for the kickstart configuration.
5. Start the installation with the
      ks=hd:sdc1:/ks.cfg
   kernel command line parameter.
6. While waiting for the slow installation run, observe a number of
   systemd-udevd processes (apparently one per preexisting raid array)
   consuming all the CPU time.
7. Observe the md devices being assembled and stopped several times
   (for no apparent reason).
8. After much time has been wasted, look at /tmp/anaconda-tb-*
   and find lines like
      KickstartValueError:The following problem occurred on line 22 of the kickstart file:
      No preexisting RAID device with the name "3" was found.
   and the mangled raid kickstart options (a few lines below in the same file):
      raid /boot --device=2 --noformat --useexisting 
      raid / --device=3 --noformat --useexisting 
      raid /home --device=5 --noformat --useexisting 
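
   For reference, a minimal ks.cfg storage section of the kind described in
   step 4 might look like the sketch below. The partition member names
   (raid.01, sda2, sdb2, etc.) are assumptions for illustration only; the
   real file must match the actual disk layout.

```text
# Hypothetical kickstart storage fragment (member devices are illustrative).
# The md2/md3/md5 arrays already exist and already carry filesystems,
# hence --noformat on every line.
part raid.01 --noformat --onpart=sda2
part raid.02 --noformat --onpart=sdb2
raid /boot --noformat --device=md2
raid /     --noformat --device=md3
raid /home --noformat --device=md5
```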

Actual results:
Installation fails with a nonsensical error.
Why does anaconda add `--useexisting' on its own?
Why does it rewrite the raid device names (md3 becomes 3)?
What are all those systemd-udevd processes doing with that much CPU time?
And why does the installer strip the `swap' keyword from the
`part swap' options?

Expected results:
The installer should honor the configuration the user explicitly provides.

Additional info:
I have already tried to work around this using:
1. Preinstallation script (%pre ... %end) to:
   --- Start all preexisting raid arrays.
   --- Create symbolic links from /dev/md/ to /dev/md[1-9]* like
            2 -> ../md2
            3 -> ../md3
            5 -> ../md5
   --- Generate /etc/mdadm.conf describing all the preexisting raid arrays
       (tried two variants: one using `devices', one using `UUID').
2. Using a shell on the second virtual console (tty2) to manually verify
   the active raid configuration and the correctness of /etc/mdadm.conf.
Neither approach worked.
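
The %pre workaround described in point 1 above looked roughly like the
sketch below. This is a hedged reconstruction, not the reporter's actual
script: the array numbers (2, 3, 5) come from the report, but everything
else is an assumption and would have to match the real layout.

```text
%pre
# Assemble all preexisting raid arrays found on the disks.
mdadm --assemble --scan

# Create short-name symlinks under /dev/md/ (2 -> ../md2, etc.),
# matching the names anaconda appears to look for.
mkdir -p /dev/md
for n in 2 3 5; do
    ln -sf ../md$n /dev/md/$n
done

# Describe the assembled arrays in /etc/mdadm.conf.
# (Per the report, both device-list and UUID= variants were tried.)
mdadm --detail --scan > /etc/mdadm.conf
%end
```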
Comment 1 QingLong 2013-07-10 10:28:34 EDT
 Samantha N. Bueno 2013-07-10 09:58:51 EDT:
>
> NEW → CLOSED → WONTFIX → NOTABUG

I strongly disagree with the NOTABUG resolution: the installer is ignoring
an explicitly provided, valid kickstart configuration.
