Bug 476818 - Fedora 10 dmraid: mkinitrd creates non working initrd
Status: CLOSED NEXTRELEASE
Product: Fedora
Classification: Fedora
Component: mkinitrd
Version: 10
Hardware: x86_64 Linux
Priority: low    Severity: urgent
Assigned To: Hans de Goede
Fedora Extras Quality Assurance
Duplicates: 476366 476546 485273
Reported: 2008-12-17 05:48 EST by bsquare
Modified: 2009-04-26 02:05 EDT (History)
21 users

Fixed In Version: 6.0.71-4.fc10
Doc Type: Bug Fix
Last Closed: 2009-03-09 19:10:45 EDT
Attachments:
1. my grub.conf with an entry using UUID (1.04 KB, text/plain), 2008-12-17 05:48 EST, bsquare
2. /etc/fstab with mount point using UUID (commented there) (1.04 KB, text/plain), 2008-12-17 05:49 EST, bsquare
3. mkinitrd -f -v /boot/test.img 2.6.27.7-134.fc10.x86_64 &> /tmp/log (7.43 KB, text/plain), 2008-12-17 06:44 EST, bsquare
4. /sbin/dmsetup ls (432 bytes, application/octet-stream), 2009-01-06 07:38 EST, bsquare
5. Output of resolve_dm_name and get_numeric_dev (1.72 KB, text/plain), 2009-01-27 11:07 EST, bsquare
6. New experimental mkinitrd script (43.60 KB, text/plain), 2009-02-12 18:16 EST, Hans de Goede
7. mkinitrd-bashx.txt.bz2 (19.98 KB, application/octet-stream), 2009-02-13 11:54 EST, Alexander Holler
8. New new experimental mkinitrd script (43.60 KB, text/plain), 2009-02-13 12:43 EST, Hans de Goede
9. mkinitrd2-bashx.txt.bz2 (19.37 KB, application/octet-stream), 2009-02-13 13:08 EST, Alexander Holler
10. init (from the initrd created by mkinitrd) (1.80 KB, text/plain), 2009-02-13 16:49 EST, Alexander Holler
Description bsquare 2008-12-17 05:48:19 EST
Created attachment 327222 [details]
my grub.conf with an entry using UUID

Description of problem:
Unable to boot with kernel-2.6.27.7-134.fc10.x86_64
Very early in boot, the error message appears:
"error mounting /dev/root on /sysroot as ext3: no such file or dir".

Version-Release number of selected component (if applicable):
kernel-2.6.27.7-134.fc10.x86_64
mkinitrd-6.0.71-2.fc10.x86_64
and tested with mkinitrd-6.0.71-3.fc10.x86_64

How reproducible:
100%

Steps to Reproduce:
1.full upgrade from F9 -> F10 with yum
2.reboot with kernel-2.6.27.7-134.fc10.x86_64
3.
  
Actual results:
Unable to boot (but it works using an F9 kernel).

Expected results:
Boot as usual.

Additional info:
Following Bug 466534, and then Bug 470628, I've finally created this bug, which should be distinct (although there are similarities).

Running my last F9 kernel (that is, 2.6.27.5-41.fc9.x86_64), I've
tried creating various versions of the initrd file:
 - using the patched packages from
http://kojipkgs.fedoraproject.org/packages/mkinitrd/6.0.71/3.fc10/
 - using the patched packages from updates-testing yum repository
 - with both of them, I've tried the following (and updated my grub.conf accordingly):
    - mkinitrd -f --with=scsi_wait_scan
initrd-2.6.27.7-134.hacked.fc10.x86_64.img 2.6.27.7-134.fc10.x86_64
    - mkinitrd -f --with=scsi_wait_scan --with=ahci
initrd-2.6.27.7-134.hacked.fc10.x86_64.img 2.6.27.7-134.fc10.x86_64

But unfortunately it still does not work.

Before that, I tried using UUIDs in grub and fstab to specify the devices, but it still failed (cf. attachments).
Comment 1 bsquare 2008-12-17 05:49:22 EST
Created attachment 327223 [details]
/etc/fstab with mount point using UUID (commented there)
Comment 2 bsquare 2008-12-17 05:55:25 EST
The web system won't let me add another attachment (saying I've already filled the form for attachment XXX ...), so here is the output of blkid:
# blkid
/dev/mapper/isw_beadbdgjge_ARRAYp5: LABEL="/home" UUID="c8a1b935-e6b0-4dc8-a5cb-91da4223787d" SEC_TYPE="ext2" TYPE="ext3"
/dev/mapper/isw_beadbdgjge_ARRAYp2: LABEL="/" UUID="0310acee-e028-4a29-95b4-709e16294dcb" SEC_TYPE="ext2" TYPE="ext3"
/dev/mapper/isw_beadbdgjge_ARRAYp1: LABEL="/boot" UUID="1b36a811-c0c7-4cba-bc0a-78ae57307185" SEC_TYPE="ext2" TYPE="ext3"
/dev/mapper/isw_beadbdgjge_ARRAYp3: TYPE="swap" LABEL="SWAP-isw_beadbd" UUID="41b65b98-2d19-4273-91e4-6f678bb4c511"
Comment 3 bsquare 2008-12-17 06:00:18 EST
Important information: my computer has 2 HDDs in RAID 0.
Comment 4 Hans de Goede 2008-12-17 06:03:00 EST
Thanks for this new bug and all the info.

Can you also run (in chroot /mnt/sysimage from rescue disk, or while booting an older kernel):
mkinitrd -f -v /boot/test.img <kernelver-release> &> log

And attach the resulting log file? Thanks.
Comment 5 Hans de Goede 2008-12-17 06:03:42 EST
(In reply to comment #3)
> Important information, my computer has 2 HDD, in RAID 0.

Is this raid handled by your BIOS (dmraid), or is this purely linux software raid (mdraid) ?
Comment 6 bsquare 2008-12-17 06:44:44 EST
Created attachment 327231 [details]
mkinitrd -f -v /boot/test.img 2.6.27.7-134.fc10.x86_64 &> /tmp/log
Comment 7 bsquare 2008-12-17 06:46:42 EST
(In reply to comment #5)
> (In reply to comment #3)
> > Important information, my computer has 2 HDD, in RAID 0.
> 
> Is this raid handled by your BIOS (dmraid), or is this purely linux software
> raid (mdraid) ?

The RAID should be handled by my BIOS (there is a RAID "page" at each boot).
How can I confirm it?
# lsmod |grep -i
(gives nothing)

# lspci  |grep -i raid
00:1f.2 RAID bus controller: Intel Corporation 631xESB/632xESB SATA RAID Controller (rev 09)
Comment 8 Hans de Goede 2008-12-17 07:19:03 EST
Never mind, the last log file you've attached gives me an answer (it is dmraid). It also shows me what is going wrong (although not why).

I'll see if I can reproduce this.
Comment 9 bsquare 2008-12-17 07:26:24 EST
(In reply to comment #8)
> Never mind. the last log file you've attached gives me an answer (it is dmraid)
How did you reach this conclusion? (It's interesting to know.)

> It also shows me what is going wrong (although not why).
OK, so what is the issue?

> I'll see if I can reproduce this.
Great, let me know.
Comment 10 bsquare 2008-12-17 13:38:53 EST
Any news, hdegoede?
Comment 11 Hans de Goede 2008-12-17 14:02:48 EST
(In reply to comment #10)
> Any news hdegoede ?

I'm sorry I got sidetracked tracking down another mkinitrd bug, and I'm calling it a day for today. I'll get back on this tomorrow.
Comment 12 bsquare 2008-12-18 13:02:57 EST
(In reply to comment #11)
> I'm sorry I got sidetracked tracking down another mkinitrd bug, and I'm calling
> it a day for today. I'll get back on this tomorrow.
Have you found time to work on it?
Comment 13 Hans de Goede 2008-12-29 06:09:29 EST
Hi,

I hope you've had a good Christmas!

After much fighting with kvm (GRRR) I finally managed to finish my attempts to reproduce this. Unfortunately I failed to reproduce this. But I have found some important clues.

Can you please boot the (working) F-9 kernel and then execute the following commands in a shell and paste the output here? Thanks!
/sbin/dmsetup ls
/sbin/dmraid -s -craidname
/sbin/dmsetup table

In the "mkinitrd -v" output of my test install there is this line:
'Adding dm map "sil_asaasashgsdh"'

That line is not in yours; this is the cause of your boot failure. Now to find out why.
Comment 14 bsquare 2009-01-06 07:38:46 EST
Created attachment 328276 [details]
/sbin/dmsetup ls
Comment 15 bsquare 2009-01-06 07:43:19 EST
(In reply to comment #13)
> Hi,
> 
> I hope you've had a good Christmas!
Unfortunately it was not, but thanks for asking ;)
What about yours?

> 
> After much fighting with kvm (GRRR) I finally managed to finish my attempts to
> reproduce this. Unfortunately I failed to reproduce this. But I have found some
> important clues.
Ok.

> 
> Can you please boot the (working) F-9 kernel and then execute the following
> commands in a shell and paste the output here?, thanks!
Once again, I cannot add more than one attachment :@
So here is the output of the other commands:

> /sbin/dmsetup ls
Cf. attachment.

> /sbin/dmraid -s -craidname
isw_beadbdgjge_ARRAY

> /sbin/dmsetup table
isw_beadbdgjge_ARRAYp2: 0 81963630 linear 253:0 208845
isw_beadbdgjge_ARRAY: 0 976552448 striped 2 256 8:0 0 8:16 0
isw_beadbdgjge_ARRAYp1: 0 208782 linear 253:0 63
isw_beadbdgjge_ARRAYp5: 0 885984687 linear 253:0 90558468
isw_beadbdgjge_ARRAYp4: 0 885984750 linear 253:0 90558405
isw_beadbdgjge_ARRAYp3: 0 8385930 linear 253:0 82172475

> In the "mkinitrd -v" output of my test install there is this line:
> 'Adding dm map "sil_asaasashgsdh"'
> 
> Which is not in yours, this is the cause of your boot failure, now to find out
> why.
Let me know asap.

I'm going to upgrade to kernel-2.6.27.9-159.fc10.
I'll tell if something changes.
Comment 16 bsquare 2009-01-08 07:11:01 EST
Hi Hans de Goede,

Is there any news?
Comment 17 waterboyharris 2009-01-10 06:32:04 EST
I am having the same problem and can help recreate it. I have tried both a yum upgrade and a fresh install with the same result. I always disconnect every hard drive from the system before installing. My current flash drive is an OCZ 16GB DIESEL. My partition scheme is: 1. /boot 100M ext2; 2. / 12292M xfs; 3. swap 1024M swap; 4. /share 1843M vfat. Installation to the flash drive is the same as any other installation. Reconnect the hard drives and you now have a portable installation of Fedora 9. This is where Fedora 10 chokes. As long as there is no hard drive in the system, Fedora 10 will boot normally. With a hard drive in the system it will boot but cannot find the / partition. I have a minimal understanding of the boot process, but it seems like a hard disk in the system is being named /dev/sda instead of the flash drive being /dev/sda. I hope this is helpful; I really want non-live Fedora 10 on my thumb drive.
Comment 18 bsquare 2009-01-18 06:27:24 EST
Any news?
Comment 19 Hans de Goede 2009-01-18 17:03:02 EST
Sorry for not responding sooner.

OK, I'm still not seeing why this is not working for you. After booting the F-10 system with the working F-9 kernel, can you please execute (as root) the following two commands (from the same shell!) and paste the output?
. /etc/rc.d/init.d/functions
resolve_dm_name isw_beadbdgjge_ARRAY

And can you also execute the following and paste the output ?
echo dm list isw_beadbdgjge_ARRAY | nash --force --quiet

Thanks!
Comment 20 bsquare 2009-01-19 02:33:04 EST
(In reply to comment #19)
> Sorry for not responding sooner.
> 
> Ok, I'm still not seeing / getting why this is not working for you, after
> booting the F-10 system with the working F-9 kernel, can you please execute (as
> root) the following 2 commands (from the same shell!) and paste the output?
> . /etc/rc.d/init.d/functions
> resolve_dm_name isw_beadbdgjge_ARRAY
It returns nothing.
If you wish, I can look at the source code (I really love GNU Bash) and see where it goes wrong.
> 
> And can you also execute the following and paste the output ?
> echo dm list isw_beadbdgjge_ARRAY | nash --force --quiet
# echo dm list isw_beadbdgjge_ARRAY | nash --force --quiet
rmparts sdb
rmparts sda
create isw_beadbdgjge_ARRAY
part isw_beadbdgjge_ARRAY

> 
> Thanks!
Comment 21 Hans de Goede 2009-01-19 03:00:10 EST
(In reply to comment #20)
> (In reply to comment #19)
> > Sorry for not responding sooner.
> > 
> > Ok, I'm still not seeing / getting why this is not working for you, after
> > booting the F-10 system with the working F-9 kernel, can you please execute (as
> > root) the following 2 commands (from the same shell!) and paste the output?
> > . /etc/rc.d/init.d/functions
> > resolve_dm_name isw_beadbdgjge_ARRAY
> It returns nothing.

Ah, that is not good, that is the cause of your problem!

> If you wish, I can look at the source code (I love (really) the GNU/Bash) and
> see where it goes wrong.

Yes please, although I must warn you that the bash function is rather convoluted; to be honest, I'm not completely sure what exactly it is supposed to do. In my case it just returns the name passed to it.
Comment 22 Andreas Piesk 2009-01-20 08:00:18 EST
I hit the same problem on RHEL 5.2 and found out what's going wrong.

dmraid -ay -t returns the devices in a different order than dmsetup does:

# dmraid -ay -t
isw_dhhcfhgeah_system: 0 488390392 mirror core 3 131072 nosync block_on_error 2 /dev/sdc 0 /dev/sdb 0

dmraid returns sdc first.

# ls -l /dev/sdb /dev/sdc
brw-r----- 1 root disk 8, 16 Jan 19 22:45 /dev/sdb
brw-r----- 1 root disk 8, 32 Jan 19 22:45 /dev/sdc

# dmsetup table | grep isw_dhhcfhgeah_system
isw_dhhcfhgeah_system: 0 488390392 mirror core 3 131072 nosync block_on_error 2 8:16 0 8:32 0

but dmsetup returns 8:16 == sdb first.

That's why the sed expression in resolve_dm_name() doesn't match and the function returns nothing.

I tried dmraid-1.0.0rc15, but it didn't work:

# ./dmraid -d -ay -t
ERROR: isw: Could not find disk /dev/sdb in the metadata
ERROR: isw: Could not find disk /dev/sdc in the metadata
no raid disks

No debug output at all.
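The diagnosis above can be reproduced with plain string handling, without any real disks. This is an illustrative bash sketch (the `majmin` helper below is a hypothetical stand-in for the real lookup in /etc/rc.d/init.d/functions, which stats the device nodes): translating the device names in the dmraid line to major:minor numbers, in dmraid's order, produces a string that never matches the line dmsetup prints, so the resolver comes up empty.

```shell
#!/bin/bash
# Lines taken from comment 22: dmraid lists sdc first, dmsetup lists 8:16 (sdb) first.
dmraid_line="0 488390392 mirror core 3 131072 nosync block_on_error 2 /dev/sdc 0 /dev/sdb 0"
dmsetup_line="0 488390392 mirror core 3 131072 nosync block_on_error 2 8:16 0 8:32 0"

# Hypothetical stand-in for the real major:minor lookup.
majmin() {
    case "$1" in
        /dev/sdb) echo "8:16" ;;
        /dev/sdc) echo "8:32" ;;
    esac
}

# Translate device names to major:minor, keeping dmraid's order.
translated=$dmraid_line
for dev in /dev/sdc /dev/sdb; do
    translated=${translated//$dev/$(majmin "$dev")}
done

echo "$translated"
# The exact-match comparison the resolver effectively performs fails here:
if [ "$translated" != "$dmsetup_line" ]; then
    echo "no match: resolve_dm_name would return nothing"
fi
```

Because the disks are swapped, the translated line ends in "8:32 0 8:16 0" while dmsetup's ends in "8:16 0 8:32 0"; nothing else differs.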
Comment 23 Andreas Piesk 2009-01-20 09:32:45 EST
I tried to work around the problem by creating the RAID set with dmraid instead of using the BIOS:

dmraid -f isw -C system --type 1 --disks "/dev/sdb,/dev/sdc"

The creation itself worked, but now dmsetup reports sdc,sdb. It seems dmraid always creates the dm map in the opposite order.

I think I'm going back to mdadm.
Comment 24 Christian Jose 2009-01-22 17:19:17 EST
There is a possibility that this bug is the same as the one I raised yesterday: Bug 481037.

I'll try and find a minute to run the above checks and post the results.
Comment 25 bsquare 2009-01-23 02:11:44 EST
(In reply to comment #22)
> i hit the same problem in RHEL5.2 and found out what's going wrong.
> 
> dmraid -ay -t returns the devices in a different order as dmsetup does:
> 
> # dmraid -ay -t
> isw_dhhcfhgeah_system: 0 488390392 mirror core 3 131072 nosync block_on_error 2
> /dev/sdc 0 /dev/sdb 0

> # dmsetup table | grep isw_dhhcfhgeah_system
> isw_dhhcfhgeah_system: 0 488390392 mirror core 3 131072 nosync block_on_error 2

It's indeed the same issue as mine:
# dmraid -ay -t
isw_beadbdgjge_ARRAY: 0 976551424 striped 2 256 /dev/sda 0 /dev/sdb 0
isw_beadbdgjge_ARRAYp1: 0 208782 linear /dev/mapper/isw_beadbdgjge_ARRAY 63
isw_beadbdgjge_ARRAYp2: 0 81963630 linear /dev/mapper/isw_beadbdgjge_ARRAY 208845
isw_beadbdgjge_ARRAYp3: 0 8385930 linear /dev/mapper/isw_beadbdgjge_ARRAY 82172475
isw_beadbdgjge_ARRAYp5: 0 885984687 linear /dev/mapper/isw_beadbdgjge_ARRAY 90558468

# dmsetup table
isw_beadbdgjge_ARRAYp2: 0 81963630 linear 253:0 208845
isw_beadbdgjge_ARRAY: 0 976552448 striped 2 256 8:0 0 8:16 0
isw_beadbdgjge_ARRAYp1: 0 208782 linear 253:0 63
isw_beadbdgjge_ARRAYp5: 0 885984687 linear 253:0 90558468
isw_beadbdgjge_ARRAYp4: 0 885984750 linear 253:0 90558405
isw_beadbdgjge_ARRAYp3: 0 8385930 linear 253:0 82172475

The orders given by dmraid and dmsetup are not the same.
Comment 26 Hans de Goede 2009-01-23 02:33:10 EST
(In reply to comment #25)
> (In reply to comment #22)
> > i hit the same problem in RHEL5.2 and found out what's going wrong.
> > 
> > dmraid -ay -t returns the devices in a different order as dmsetup does:
> > 
> > # dmraid -ay -t
> > isw_dhhcfhgeah_system: 0 488390392 mirror core 3 131072 nosync block_on_error 2
> > /dev/sdc 0 /dev/sdb 0
> 
> > # dmsetup table | grep isw_dhhcfhgeah_system
> > isw_dhhcfhgeah_system: 0 488390392 mirror core 3 131072 nosync block_on_error 2
> 
> It's well the same issue than mine:
> # dmraid -ay -t
> isw_beadbdgjge_ARRAY: 0 976551424 striped 2 256 /dev/sda 0 /dev/sdb 0
> isw_beadbdgjge_ARRAYp1: 0 208782 linear /dev/mapper/isw_beadbdgjge_ARRAY 63
> isw_beadbdgjge_ARRAYp2: 0 81963630 linear /dev/mapper/isw_beadbdgjge_ARRAY
> 208845
> isw_beadbdgjge_ARRAYp3: 0 8385930 linear /dev/mapper/isw_beadbdgjge_ARRAY
> 82172475
> isw_beadbdgjge_ARRAYp5: 0 885984687 linear /dev/mapper/isw_beadbdgjge_ARRAY
> 90558468
> 
> # dmsetup table
> isw_beadbdgjge_ARRAYp2: 0 81963630 linear 253:0 208845
> isw_beadbdgjge_ARRAY: 0 976552448 striped 2 256 8:0 0 8:16 0
> isw_beadbdgjge_ARRAYp1: 0 208782 linear 253:0 63
> isw_beadbdgjge_ARRAYp5: 0 885984687 linear 253:0 90558468
> isw_beadbdgjge_ARRAYp4: 0 885984750 linear 253:0 90558405
> isw_beadbdgjge_ARRAYp3: 0 8385930 linear 253:0 82172475
> 
> The order given by dmraid and dmsetup are not the same.

No, it's not about the order in which they list the partitions; it is about the order of the disks within the set, and they agree on that. So you are seeing a different bug.
Comment 27 Andreas Piesk 2009-01-23 06:20:03 EST
So this is not a mkinitrd issue, right?
Should I open a new bug for the dmraid component, or could this bug be moved/reassigned to dmraid?
Comment 28 bsquare 2009-01-27 11:07:29 EST
Created attachment 330103 [details]
Output of resolve_dm_name and get_numeric_dev
Comment 29 bsquare 2009-01-27 11:13:16 EST
(In reply to comment #21)
> (In reply to comment #20)
> > (In reply to comment #19)
> > > Sorry for not responding sooner.
> > > 
> > > Ok, I'm still not seeing / getting why this is not working for you, after
> > > booting the F-10 system with the working F-9 kernel, can you please execute (as
> > > root) the following 2 commands (from the same shell!) and paste the output?
> > > . /etc/rc.d/init.d/functions
> > > resolve_dm_name isw_beadbdgjge_ARRAY
> > It returns nothing.
> 
> Ah, that is not good, that is the cause of your problem!

I've done some other tests to determine what happens.
To make things easier, I've extracted the get_numeric_dev and resolve_dm_name functions to test them in debug mode (bash -x).
You can see the result in the last attachment.

In fact, everything seems OK until the last instruction.
# /sbin/dmraid -ay -t --ignorelocking -> ok
# majmin -> ok for /dev/sda and /dev/sdb
# after the "for" loop, the resulting line is -> "0 976551424 striped 2 256 8:0 0 8:16 0"

Then the following instructions do "nothing" because the patterns don't match.
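The failed match can be seen by lining up the two outputs from comment 25: the translated dmraid line reads "0 976551424 striped ..." while dmsetup's table line reads "0 976552448 striped ...". The sector counts differ, so an exact-string match finds nothing. A minimal sketch (the sed expression here is illustrative, not the exact one from /etc/rc.d/init.d/functions):

```shell
#!/bin/bash
# The translated dmraid line (from comment 29) and the dmsetup table line
# (from comment 25) for the same array; note 976551424 vs 976552448.
translated="0 976551424 striped 2 256 8:0 0 8:16 0"
table_line="isw_beadbdgjge_ARRAY: 0 976552448 striped 2 256 8:0 0 8:16 0"

# Illustrative version of the match: print the dm name only if the table
# line carries exactly the translated mapping.
name=$(printf '%s\n' "$table_line" | sed -n "s|^\([^:]*\): $translated\$|\1|p")

if [ -z "$name" ]; then
    echo "no match: the sector counts differ, so the name does not resolve"
fi
```

With identical sector counts the same sed expression would print the array name; one differing field anywhere in the line is enough to make the function return nothing, silently.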
Comment 30 bsquare 2009-01-31 11:59:01 EST
(In reply to comment #26)
Any news with this additional information?
Comment 31 Hans de Goede 2009-02-03 05:56:40 EST
*** Bug 476366 has been marked as a duplicate of this bug. ***
Comment 32 Hans de Goede 2009-02-03 07:47:20 EST
*** Bug 476546 has been marked as a duplicate of this bug. ***
Comment 33 Hans de Goede 2009-02-03 08:06:18 EST
Hello world (aka all in the CC list)

First of all apologies for not responding to this bug for so long, we've been
swamped with other stuff. We understand this is a rather critical bug.

I've prepared a mkinitrd build, which I believe fixes this, you can find it
here:
http://koji.fedoraproject.org/koji/taskinfo?taskID=1101418

To give this version a try, click on the build for your architecture and download the following files:
mkinitrd-6.0.71-4.fc10.XXXX.rpm
nash-6.0.71-4.fc10.XXXX.rpm
libbdevid-python-6.0.71-4.fc10.XXXX.rpm

And then as root do:

KVER=######-####
rpm -Uvh mkinitrd-6.0.71-4.fc10.*.rpm nash-6.0.71-4.fc10.*.rpm \
  libbdevid-python-6.0.71-4.fc10.*.rpm
mv /boot/initrd-${KVER}.img /boot/initrd-${KVER}.img.old
mkinitrd /boot/initrd-${KVER}.img ${KVER}

Where KVER should be set to the kernel version-release for the non-bootable
kernel you are trying to fix. This is the string between "initrd-" and ".img"
when you do a:
ls /boot

You can find the version-release string for all installed kernels by doing:
rpm -q --qf "%{VERSION}-%{RELEASE}\n" kernel

This will print the full name-version-release.arch for each installed kernel
followed by the version-release.

If you do not have any bootable kernel at all, you can try to fix this by
booting the install CD into rescue mode and then doing:
chroot /mnt/sysimage
before executing the commands above.

Please let us know if this fixes things for you.
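As a convenience, the version-release string can also be derived directly from an initrd filename, since it is exactly the part between "initrd-" and ".img". A small sketch, using one of the filenames from this report as an example (adjust the path to your own /boot entry):

```shell
#!/bin/bash
# Strip the "initrd-" prefix and ".img" suffix to recover KVER.
img=/boot/initrd-2.6.27.7-134.fc10.x86_64.img

KVER=$(basename "$img" .img)   # initrd-2.6.27.7-134.fc10.x86_64
KVER=${KVER#initrd-}           # 2.6.27.7-134.fc10.x86_64
echo "$KVER"
```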
Comment 34 Will Dunn 2009-02-03 09:40:38 EST
In reply to Comment #33:
Just booted up with kernel 2.6.27.12-170.2.5.fc10.x86_64 after following the above on an ICH9R RAID machine where the kernel wouldn't boot before - I'd followed all the above posts and checked that I had the same output as bsquare, so I was definitely seeing the same bug.

Many thanks for all your work. Just say if you'd like me to test anything on my system.
Comment 35 Alexandre Silva Lopes 2009-02-04 04:23:48 EST
Hi,

This doesn't fix my problem either. The dmraid-related lines are still missing from the initrd's init file.

Thanks for your effort.
Comment 36 Will Dunn 2009-02-04 10:58:23 EST
Just thought I'd clarify that the fix *does* work on my system.
Comment 37 Dmitry Burstein 2009-02-05 17:58:18 EST
Thanks a lot! It worked on my ICH5R.
Comment 38 bsquare 2009-02-09 07:52:36 EST
Thank you, Hans de Goede, it fixed my issue.
I'm now using the 2.6.27.12-170.2.5.fc10.x86_64 kernel version.
Comment 39 Chad Merritt 2009-02-09 18:57:12 EST
I was finally able to get this working on my nvidia-based box. I had to install the system fresh (which is what I was planning the whole time). Before the first boot I went into rescue mode and completed the steps of adding the RPMs and the new initrd.

I was then able to boot normally with the mapped drives mounted (per df). That was with 2.6.27.5-117. After that, a yum update worked perfectly, upgrading me to 2.6.27.12-170.

Thanks for the hard work, Hans; now I can stop using my laptop as a desktop!
Comment 40 Li Qi 2009-02-11 13:15:04 EST
Thanks a lot. This really resolved my problems with ICH8R. I can now use Fedora 10 with kernel 2.6.27.5-117.fc10.i686.PAE on RAID0 ICH8R. I will upgrade to the most recent kernel very soon.
Comment 41 Hans de Goede 2009-02-12 06:29:13 EST
Hi all,

Given the number of success reports I'm going to do an official mkinitrd update for this for F-10. It will start in updates-testing first.

For those who have not yet tried the updated mkinitrd I provided here, the update system will add a comment explaining how to install the update on your system, then you need to regenerate your initrd like this:

KVER=######-####
mv /boot/initrd-${KVER}.img /boot/initrd-${KVER}.img.old
mkinitrd /boot/initrd-${KVER}.img ${KVER}

Where KVER should be set to the kernel version-release for the non-bootable
kernel you are trying to fix. This is the string between "initrd-" and ".img"
when you do a:
ls /boot

You can find the version-release string for all installed kernels by doing:
rpm -q --qf "%{VERSION}-%{RELEASE}\n" kernel

This will print the full name-version-release.arch for each installed kernel
followed by the version-release.

If you do not have any bootable kernel at all, you can try to fix this by
booting the install CD in to rescue mode and then do:
chroot /mnt/sysimage
<install mkinitrd update>
<execute the commands above>
Comment 42 Hans de Goede 2009-02-12 13:15:00 EST
*** Bug 485273 has been marked as a duplicate of this bug. ***
Comment 43 Alexander Holler 2009-02-12 15:41:12 EST
Still no luck. I now have an initrd with dmraid, but when I boot, device-mapper fails with "reload ioctl failed, table ioctl failed" and afterwards nash segfaults in nashDmDevGetName.

The two lines in the init of the initrd are:
dm create via_bdaibgjjch --uuid DMRAID-via_bdaibgjjch 0 312581807 mirror core 2 131072 nosync 2 8:32 0 8:16 0 1 handle_errors
dm partadd via_bdaibgjjch

root is installed on /dev/mapper/via_bdaibgjjchp11 (ext4), which is sdb11/sdc11 when booted with the fc10-kde-x86_64-livecd.
Comment 44 Hans de Goede 2009-02-12 18:16:33 EST
Created attachment 331781 [details]
New experimental mkinitrd script

(In reply to comment #43)
> Still no luck. I'm now having a initrd with dmraid, but when i'm booting
> dm-mapper fails with "reload ioctl failed, table ioctl failed" and afterwards
> nash segfaults in nashDmDevGetName.
> 
> The two lines in the init of the initrd are:
> dm create via_bdaibgjjch --uuid DMRAID-via_bdaibgjjch 0 312581807 mirror core 2
> 131072 nosync 2 8:32 0 8:16 0 1 handle_errors
> dm partadd via_bdaibgjjch
> 
> root is installed on /dev/mapper/via_bdaibgjjchp11 (ext4) which is sdb11/sdc11
> when booted with the fc10-kde-x86_64-livecd.

Hmm, that sucks. I'm working on a different way of handling dmraid in mkinitrd in rawhide, which will probably fix this. I've attached the new mkinitrd script itself; before you use it, make sure you have nash-6.0.71-4 installed, and update your dmraid package to this version (or newer):
http://koji.fedoraproject.org/koji/buildinfo?buildID=82481
Comment 45 Alexander Holler 2009-02-12 22:17:36 EST
That doesn't work, at least not inside a chroot (with /dev, /proc and /sys mounted into it):

[root@localhost /]# ./mkinitrd.experimental /tmp/initrd-2.6.27.12-170.2.5.fc10.x86_64.img 2.6.27.12-170.2.5.fc10.x86_64                                        
WARNING: /sys/block/sda//sda11 is a not a block sysfs path, skipping           

The warning is wrong:

[root@localhost /]# find /sys/devices -name 'sda11'
/sys/devices/pci0000:00/0000:00:0f.0/host0/target0:0:0/0:0:0:0/block/sda/sda11

And the resulting initramfs is still without dmraid.
Comment 46 Alexander Holler 2009-02-12 22:56:59 EST
Is there any documentation about nash's built-in dm command?
Comment 47 Rene Linde 2009-02-13 03:21:10 EST
Thanks to Hans and his team!

In reply to comment #41:
I'm very happy with a running F10 with kernel version 2.6.27.12-170.2.5.fc10.x86_64 on an ICH8R onboard software RAID controller.
The RAID is a RAID1 system. In rescue mode I updated the mkinitrd and nash packages from the updates-testing repository. After creating the new ramdisk image, my system boots F10 from the RAID1, and not from sda or sdb like before.
Comment 48 Li Qi 2009-02-13 04:12:21 EST
I have found some problems when trying to resolve the initrd problem as in comment #41 or comment #33. When you install F10 on ICH5R RAID0 and split the space for "/" into several partitions, for example one for "/" and another for "/usr", the kernel cannot find the RAID0 or the rootfs at all. At the same time you cannot use ext4 (its support is a module in the initrd); only ext3 (whose support is built into the kernel) works.

I installed F10 on my home desktop with ICH8R as in comment #33, and it works perfectly. Thanks again. But when I tried the same approach to resolve the problem with ICH5R on my work desktop, it didn't work very well. So I had to use only one partition for "/" and change ext4 to ext3. After all this, F10 finally works.
Comment 49 Hans de Goede 2009-02-13 06:35:12 EST
(In reply to comment #45)
> That doesn't work, at least not inside a chroot (where /dev, /proc and /sys is
> mounted into):
> 
> [root@localhost /]# ./mkinitrd.experimental
> /tmp/initrd-2.6.27.12-170.2.5.fc10.x86_64.img 2.6.27.12-170.2.5.fc10.x86_64     
> WARNING: /sys/block/sda//sda11 is a not a block sysfs path, skipping           
> 

Oh, I forgot about that issue, sorry. This does not happen in rawhide, due to a nash fix removing the // in the path (replacing it with a single /).

You can do one of two things to fix this:
1) In mkinitrd.experimental replace line 282:
        dev=$(for x in /sys/block/* ; do findall $x/ -name dev ; done | while read device ; do \
   With:
        dev=$(for x in /sys/block/* ; do findall $x -name dev ; done | while read device ; do \

So basically, in that line replace "$x/" with "$x".

2) Update to rawhide's nash, which you can find here:
http://koji.fedoraproject.org/koji/buildinfo?buildID=82256

I think option 1 is the easiest. Thanks for testing, and sorry for this.
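The warning comes from the doubled slash in the constructed path, not from a missing sysfs node. This toy snippet shows the string-level effect of the two fixes: dropping the trailing slash from `$x` (fix 1), or squeezing repeated slashes the way the rawhide nash fix does (fix 2). The paths are plain strings here; no /sys access is needed.

```shell
#!/bin/bash
x=/sys/block/sda

bad_path="$x//sda11"      # what the unpatched loop produced
fix1_path="$x/sda11"      # after replacing "$x/" with "$x" in the script
fix2_path=$(printf '%s\n' "$bad_path" | sed 's|//*|/|g')   # what the fixed nash effectively sees

echo "$bad_path"    # /sys/block/sda//sda11
echo "$fix1_path"   # /sys/block/sda/sda11
echo "$fix2_path"   # /sys/block/sda/sda11
```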
Comment 50 Alexander Holler 2009-02-13 10:36:09 EST
Changing only the script doesn't work: it builds an initramfs with dmraid, but nash segfaults as described in comment 43. Rawhide's nash would need Python 2.6.
The versions I used to build the segfaulting initramfs are now

[root@localhost /]# rpm -qa mkinitrd nash dmraid libbdevid-python
libbdevid-python-6.0.71-4.fc10.x86_64
nash-6.0.71-4.fc10.x86_64
dmraid-1.0.0.rc15-4.fc11.x86_64
mkinitrd-6.0.71-4.fc10.x86_64

and the (modified) experimental mkinitrd.
Comment 51 Hans de Goede 2009-02-13 10:49:41 EST
Alexander, you used the (patched) mkinitrd and you are still getting a segfault?

That one should create an initrd with an init file which does not contain any dm ... lines, but instead a single "dmraid -a y --rm_partitions XXXXXXX" line.

Are you sure you are using the right mkinitrd and the right initrd? Can you please extract the init file from the initrd generated by the experimental mkinitrd, like this:
zcat /boot/initrd-XXXXXXX.img | cpio -i init

And then attach the resulting init file?
Comment 52 Alexander Holler 2009-02-13 11:23:01 EST
Sorry, by accident I used the normal mkinitrd. But the patched experimental mkinitrd creates an initramfs without dmraid and outputs the following:

Creating initramfs                                                              
Looking for driver for /dev/sdb11 in /sys/block/sdb//sdb11                      
WARNING: /sys/block/sdb//sdb11 is a not a block sysfs path, skipping            
Using modules:  ext4                                                            
...

I think it is not necessary to attach the wrong init, as it doesn't contain any lines calling dm.
Comment 53 Alexander Holler 2009-02-13 11:54:17 EST
Created attachment 331844 [details]
mkinitrd-bashx.txt.bz2

I've attached the output of bash -x mkinitrd.experimental -v /tmp/initrd-2.6.27.12-170.2.5.fc10.x86_64.img 2.6.27.12-170.2.5.fc10
Comment 54 Hans de Goede 2009-02-13 12:43:49 EST
Created attachment 331850 [details]
New new experimental mkinitrd script

(In reply to comment #53)
> Created an attachment (id=331844) [details]
> mkinitrd-bashx.txt.bz2
> 
> I've attached the output of bash -x mkinitrd.experimental -v
> /tmp/initrd-2.6.27.12-170.2.5.fc10.x86_64.img 2.6.27.12-170.2.5.fc10

Hmm, it seems the new mkinitrd script depended on some sysfs structures only present in 2.6.29; this new version fixes that.

Please try again, and many thanks for your patience!
Comment 55 Alexander Holler 2009-02-13 13:08:35 EST
Created attachment 331852 [details]
mkinitrd2-bashx.txt.bz2

This solves the warning, but still doesn't detect the raid. Output of bash -x attached.
Comment 56 Hans de Goede 2009-02-13 13:28:49 EST
(In reply to comment #55)
> Created an attachment (id=331852) [details]
> mkinitrd2-bashx.txt.bz2
> 
> This solves the warning, but still doesn't detect the raid. Output of bash -x
> attached.

Ah, interesting. Are you running mkinitrd from the rescue mode of anaconda, or are you booted into the system using an older initrd, or ... ?
Comment 57 Alexander Holler 2009-02-13 16:49:08 EST
Created attachment 331877 [details]
init (from the initrd created by mkinitrd)

First, thanks for the help; I've finally got my first Fedora installation booted. I'll recap:

One of the problems was that I used the kde-x86_64-live-cd as the rescue system, because I installed F-10 using a netboot to get the system onto the (already created) ext4 on the VIA fakeraid.
This rescue system didn't detect the RAID on its own, so I used dmraid -ay. The next problem was that the rescue system needed a newer dmraid (see below for the version), and I had to call dmraid -an and then dmraid -ay --rm_partitions.
The --rm_partitions parameter was needed to remove the plain devices (sdbN, sdcN) used by the RAID; otherwise mkinitrd doesn't detect the RAID partitions correctly.

Then I had to update the installed system (accessed by chrooting into the mounted RAID partition with /dev, /proc and /sys mounted into it) with the following packages:
[aholler@krabat ~]$ rpm -qa mkinitrd dmraid nash libbdevid-python
dmraid-1.0.0.rc15-5.fc11.x86_64
libbdevid-python-6.0.71-4.fc10.x86_64
nash-6.0.71-4.fc10.x86_64
mkinitrd-6.0.71-4.fc10.x86_64

Finally, I ran the second experimental mkinitrd posted in the attachments to create the initramfs.

That's all that was needed to boot the freshly installed Fedora 10 ;)
Comment 58 Adam Huffman 2009-02-23 12:51:13 EST
Having just upgraded a Dell Precision 390 to F10, I've had to refer to what's in this bug report. After the upgrade (using preupgrade), the system came up at the bare grub prompt. Using a combination of the updates.img with anaconda and those packages to fix the initrd creation, I managed to get the system working again with the RAID1 array. Thanks for including all these details. This really needs to be fixed for F11.
Comment 59 Fedora Update System 2009-03-09 19:10:23 EDT
mkinitrd-6.0.71-4.fc10 has been pushed to the Fedora 10 stable repository.  If problems still persist, please make note of it in this bug report.
