Bug 134177 - panic after installing to a Smart Array
Status: CLOSED RAWHIDE
Alias: None
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: kernel
Version: 4.0
Hardware: ia64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Tom Coughlan
 
Reported: 2004-09-29 23:38 UTC by Tim Chambers
Modified: 2007-11-30 22:07 UTC

Doc Type: Bug Fix
Last Closed: 2005-01-14 14:42:52 UTC



Description Tim Chambers 2004-09-29 23:38:47 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux ia64; en-US; rv:1.7.3)
Gecko/20040924 Debian/1.7.3-2 StumbleUpon/1.995

Description of problem:
Installation on an HP rx2600 with an HP Smart Array 5302-128 controller
went fine. On reboot, this was the spew:

ELILO boot: Uncompressing Linux... done
Loading initrd initrd-2.6.8-1.528.2.10.img...done
alloc 0x3b0-0x3df from PCI IO for PCI Bus 0000:e0 failed
audit(1096477925.004:0): initialized
8042.c: i8042 controller self test timeout.
Red Hat nash version 4.1.11 starting
  Reading all physical volumes.  This may take a while...
  No volume groups found
  No volume groups found
  No volume groups found
mount: error 2 mounting ext3
mount: error 2 mounting none
switchroot: mount failed: 22
Kernel panic: Attempted to kill init!
umount /initrd/dev failed: 2
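For reference (not part of the original report), the numbers in "mount: error 2" and "switchroot: mount failed: 22" are plain errno values, so the root filesystem device/type was not found (ENOENT) and the subsequent switchroot mount got an invalid argument (EINVAL). A quick way to decode them:

```python
import os

# The trailing numbers in the panic spew are errno values.
for errnum in (2, 22):
    print(errnum, "->", os.strerror(errnum))
# 2  -> No such file or directory (ENOENT)
# 22 -> Invalid argument (EINVAL)
```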


Version-Release number of selected component (if applicable):


How reproducible:
Didn't try

Steps to Reproduce:
1. install rhel4b1 to Smart Array storage (cciss/c0d0)
2. see the panic

Additional info:

Comment 1 Tim Chambers 2004-09-30 00:04:11 UTC
I tried installing again, only changing the distro to rhel3u3 GM --
same box (named cadillac), etc. Installation (via HTTP, FWIW) went
fine. Reboot went fine. Got a shell prompt.

Comment 2 Bill Nottingham 2004-09-30 03:56:58 UTC
Did you use LVM at all?

Comment 3 Tim Chambers 2004-09-30 16:53:34 UTC
No. HP firmware says there's one hardware RAID 5 drive on the Smart
Array. I used autopartition during the install, which doesn't do LVM
by default. Unless I missed something and rhel4 changed the default.
Did it?

I repeat -- the ONLY thing I changed to get the installation to work
was to use rhel3u3 GM instead of rhel4b1.


Comment 4 Glen A. Foster 2004-10-27 19:52:32 UTC
Ping.  Is this going to be fixed in RHEL3u4?

Comment 5 Tom Coughlan 2004-10-28 21:33:11 UTC
> Is this going to be fixed in RHEL3u4?

The problem report says that RHEL 4 B1 fails, and RHEL 3 U3 works. No
mention of RHEL 3 U4. There is no reason to expect a problem with U4
either, but I would encourage you to test U4 beta and open a different
BZ if that fails.

If you can test this with RHEL 4 B2 that would be helpful.  You will
see that the installer does use LVM by default.  If you have a
problem, you might try installing without LVM.  Thanks.

Tom
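One way to run the no-LVM test suggested above is a kickstart install with explicit plain partitions on the Smart Array device. The fragment below is a hypothetical sketch, not from this report: the disk name matches the cciss/c0d0 device mentioned in the reproduction steps, but the mount points and sizes are assumptions.

```
# ks.cfg fragment (hypothetical): plain partitions, no LVM
clearpart --all --initlabel --drives=cciss/c0d0
part /boot/efi --fstype vfat --size=100  --ondisk=cciss/c0d0
part /         --fstype ext3 --size=8000 --ondisk=cciss/c0d0
part swap      --size=2000   --ondisk=cciss/c0d0
```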


Comment 6 Glen A. Foster 2004-10-28 21:47:24 UTC
I think Tim may have mis-typed -- obviously if it works in RHEL3u3
there's not a strong indication to worry about it being broken in RHEL3u4.

The REAL issue is that RHEL4-beta1 panics with SmartArray.  I
understand that LVM is the default for RHEL4 installs -- can you
please tell me/us how to turn it _off_ other than removing all LVM
partitions?

Comment 7 Tim Chambers 2004-10-29 17:17:08 UTC
I cleared up the "mis-typed" thing with Glen offline. He and I agree
that we aren't worried about this being broken in rhel3u4. I will test
this with rhel4b2 both with and without LVM. Ok to leave as NEEDINFO
until I report back.

Comment 8 Glen A. Foster 2004-11-11 22:57:59 UTC
I have a Longs Peak *workstation* (e.g., zx6000) that has the same core
chipsets, etc.  I have not seen a panic or an MCA with/on beta-2 with
this hardware.

Comment 9 Tom Coughlan 2004-12-22 14:32:06 UTC
There has been no update indicating that this problem still exists. 
Okay to close?

Comment 10 Jay Turner 2005-01-14 14:42:52 UTC
Closing.

