Red Hat Bugzilla – Bug 202416
2.6.17-1.2174_FC5smp crash with RAID controller
Last modified: 2008-01-05 18:14:19 EST
Description of problem:
The latest kernel, 2.6.17-1.2174_FC5smp, fails to load on a computer with a RAID
controller. It hangs after the "Red Hat nash version 5.0.32 starting" message.
Version-Release number of selected component (if applicable):
Kernel 2.6.17-1.2174_FC5smp (and probably kernel 2.6.17-1.2174_FC5)
Always occurs on bootup with kernel 2.6.17-1.2174_FC5smp
Steps to Reproduce:
1. Boot using kernel 2.6.17-1.2174_FC5smp on a computer with a RAID controller
Not sure offhand what the motherboard is. The RAID controller in the server is
Promise FastTrak TX2 (set up to mirror one drive to another). The system boots
fine with the kernel originally installed with FC5, but fails with kernel
2.6.17-1.2157_FC5smp and this kernel too.
I have a problem with the same symptom:
kernel-2.6.17-1.2157_FC5 and 2.6.17-1.2174_FC5
hang after the "Red Hat nash version 5.0.32 starting" message.
The original 2.6.15-1.2054_FC5 works fine. I use x86_64 SMP
on an Iwill DK8X with the motherboard's on-board SiI SATA controller.
I had FC3 working fine before, and I installed FC5 without any problems
(fresh install, only reusing the existing FC3 partitions).
Perhaps the problem is exactly that in FC3, and now in FC5, I have:
Filesystem                        1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00    68212320 6931096  57760304  11% /
/dev/sdb1                            101086   10540     85327  11% /boot
A colleague of mine uses the latest kernel on a similar system, but
he does not have /dev/mapper/VolGroup00-LogVol00 and it runs fine.
A number of people have reported a similar problem, i.e. freezing after
"Red Hat nash version 5.0.32 starting", and some of them specifically
mentioned having /dev/mapper/VolGroup00-LogVol00.
Been having the same problem with all kernels post 2.6.16-1.2133 :(
I think I might have hit the same bug.
I installed FC5 (with problems, see
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=202416) on Asus A7v333 that
has old Promise controller.
On first install with the array configured as RAID 0, it will not boot; it stops
exactly at "Red Hat nash version 5.0.32 starting". Setting it to RAID 1, it
installs and boots fine. After upgrading to 2.6.17-1.2174 (not SMP), "Red Hat
nash version 5.0.32 starting" rears its ugly face again.
I guess the trick might be not to use LVM or any RAID at all to make the 2.6.17
kernels boot.
I don't think that is an option for people who are using FC5 as a server.
This is apparently the same bug as another report, which is still not resolved.
A new kernel update has been released (Version: 2.6.18-1.2200.fc5)
based upon a new upstream kernel release.
Please retest against this new kernel, as a large number of patches
go into each upstream release, possibly including changes that
may address this problem.
This bug has been placed in NEEDINFO state.
Due to the large volume of inactive bugs in bugzilla, if this bug is
still in this state in two weeks time, it will be closed.
Should this bug still be relevant after this period, the reporter
can reopen the bug at any time. Any other users on the Cc: list
of this bug can request that the bug be reopened by adding a
comment to the bug.
In the last few updates, some users upgrading from FC4->FC5
have reported that installing a kernel update has left their
systems unbootable. If you have been affected by this problem
please check you only have one version of device-mapper & lvm2
installed. See bug 207474 for further details.
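A quick way to perform the check above is to query rpm for both packages; more than one line per package name means duplicate versions are installed. The pipeline below is a sketch (the sample output is illustrative, not from a live system); on a real system you would feed it from `rpm -q device-mapper lvm2` instead of the sample text:

```shell
# Hypothetical `rpm -q device-mapper lvm2` output showing a duplicate
# device-mapper install (sample data for illustration only):
sample="device-mapper-1.02.07-2.0.FC5.2
device-mapper-1.02.02-2.1
lvm2-2.02.06-1.0.FC5.1"

# Strip the version-release suffix to get bare package names, then count
# occurrences of each; any count greater than 1 indicates duplicates
# that should be removed before installing the kernel update.
printf '%s\n' "$sample" | sed 's/-[0-9].*//' | sort | uniq -c
```

On an affected system, `rpm -q device-mapper lvm2 | sed 's/-[0-9].*//' | sort | uniq -c` would show the same kind of count, and the older duplicate can then be removed with `rpm -e` using its full name-version string.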
If this bug is a problem preventing you from installing the
release this version is filed against, please see bug 169613.
If this bug has been fixed, but you are now experiencing a different
problem, please file a separate bug for the new problem.