Bug 40586 - Crashes when using software RAID on alpha
Status: CLOSED DUPLICATE of bug 38791
Product: Red Hat Linux
Classification: Retired
Component: kernel
Version: 7.3
Platform: alpha Linux
Severity: medium
Assigned To: Phil Copeland
QA Contact: Brock Organ
Reported: 2001-05-14 17:00 EDT by Chris Adams
Modified: 2007-04-18 12:33 EDT

Doc Type: Bug Fix
Last Closed: 2001-05-30 17:51:11 EDT

Attachments
ksymoops output (3.46 KB, text/plain) - 2001-05-14 17:01 EDT, Chris Adams
Oops with RAID1 with kernel 2.4.3-7 (2.51 KB, text/plain) - 2001-05-25 14:11 EDT, Chris Adams

Description Chris Adams 2001-05-14 17:00:40 EDT
I am trying to use Wolverine on an AlphaServer 1000A 5/300 with 512MB RAM
and two RZ29B 4G SCSI drives.

If I attempt to install with software RAID partitions, I always get a
kernel panic (see bug #38790).  I've tried several non-RAID installs, and
they work fine (and continue working just fine after install).  When I try
to convert partitions to software RAID after install, I get crashes.

I finally captured an Oops from one.  I ran it through ksymoops, which
complained about parts of the output.  It is from the stock Red Hat kernel
that was installed from the CD.  I'll attach the ksymoops output (it has
all the original lines plus some extra warnings) to this bug.
Comment 1 Chris Adams 2001-05-14 17:01:22 EDT
Created attachment 18331 [details]
ksymoops output
Comment 2 Chris Adams 2001-05-25 14:11:12 EDT
I am still seeing this with RAID1 on RC1.  I'm attaching another decoded oops
(this one shows that the kernel was in raid1_read_balance when the oops happened).
Comment 3 Chris Adams 2001-05-25 14:11:57 EDT
Created attachment 19632 [details]
Oops with RAID1 with kernel 2.4.3-7
Comment 4 Brock Organ 2001-05-30 16:03:56 EDT
I am not seeing this on our AS1000 test machine, so we haven't yet been
able to reproduce your crash ... :(

What SRM level are you running?
Comment 5 Chris Adams 2001-05-30 16:26:33 EDT
I'm running SRM V5.7-80 Mar 27 2000 09:43:35.

I'm not sure why this was changed to anaconda instead of the kernel.  I
have this trouble after install as well (that's where the oops came
from).  With RC1, I installed without RAID and then remade my /usr/local
partition with RAID1.  That is where the second oops came from.
Comment 6 Brock Organ 2001-05-30 16:49:09 EDT
I've changed it back to the kernel component.  Part of your issue above is that
you get an oops while trying to install with RAID devices, which is why I
originally filed it under anaconda ...

Here are my machine details: AS1000 5/400 - Alpha EV56 Mikasa 400 MHz;
21164A-1 400 MHz w/128 MB; SRM console: v5.4-104; ARC console: v5.68

To be pedantic, reproducing your problem would require taking the SRM level
differences into account ... we have seen SRM levels have similar effects in
previous releases ... :(
Comment 7 Chris Adams 2001-05-30 17:51:05 EDT
Mine is an AlphaServer 1000A 5/300 EV5 Noritake (actually
Noritake-Primo) 300MHz w/512MB RAM.

One thing that I thought about trying (since you suggest that firmware
can matter) is to compile a kernel for Noritake-Primo instead of
generic.  Do you think that would change anything?

One other weird thing I've noticed is that "shutdown -r now" does not
end up rebooting.  It gets back to the firmware, and then I get
(repeatedly):

(boot dka0.0.0.2000.0 -flags 0)
failed to open dka0.0.0.2000.0

until I hit ^C.  If I then type "boot dka0.0.0.2000.0 -flags 0" it
restarts and boots okay.

I was going to try different firmware, but I'm running the latest
version for this platform.

I'm open to any suggestions or ways to nail down what is happening.
Comment 8 Phil Copeland 2001-06-04 16:10:15 EDT

*** This bug has been marked as a duplicate of 38791 ***
