Bug 33949 - Root RAID 1 causes obsolete MD ioctl error message
Summary: Root RAID 1 causes obsolete MD ioctl error message
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Linux
Classification: Retired
Component: kernel
Version: 7.1
Hardware: i386
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Michael K. Johnson
QA Contact: Brock Organ
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2001-03-29 20:47 UTC by kevin_myer
Modified: 2007-04-18 16:32 UTC (History)
CC List: 0 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2001-03-29 20:47:19 UTC
Embargoed:



Description kevin_myer 2001-03-29 20:47:12 UTC
From Bugzilla Helper:
User-Agent: Mozilla/4.76 [en] (X11; U; Linux 2.4.2-0.1.28enterprise i686)


I have been testing Red Hat Wolverine on my Dell PowerEdge 1300 server.
Long story short, I currently have everything running off a single 20Gb IDE
drive and am transitioning to three Ultra160 9Gb drives on an Adaptec U2W
controller.  I am moving it all to software RAID as soon as I can get it
booting, which I currently can't.  I created two software RAID partitions,
/dev/md0 and /dev/md1.  /dev/md0 is RAID 1 and /dev/md1 is RAID 5, and both
are reiserfs partitions.  I used "find <old partition name> -xdev |
cpio -pm <new partition name>" to move things from the IDE drive to the
RAID partitions (which incidentally causes another problem - dmesg shows
messages about running out of memory and the kernel starts killing
processes, like httpd and mysqld, but the copy finishes fine...).
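
For reference, the copy step looks roughly like this; the mount points below
are illustrative rather than the exact paths I used:

  # old root filesystem mounted at /, new RAID 1 root mounted at /mnt/md0
  cd /
  find . -xdev | cpio -pm /mnt/md0

The -xdev keeps find on the one filesystem, and -pm runs cpio in pass-through
mode while preserving modification times.
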
I set up lilo for a bootable RAID configuration and rebooted.  I haven't
gotten the boot process past the LI prompt, but I can boot from a floppy
disk.  That works fine: the RAM disk loads, then the kernel image loads.  It
attempts to mount /dev/md0 and then the kernel panics with the following
message:

swapper (pid 1) used obsolete MD ioctl, upgrade your software to use new
ictls <this is verbatim>
kernel panic:  unable to mount device (09:00) on VFS <from my flawed
memory>

The only problem, unless I'm mistaken, is that no software besides kernel
software is loaded yet.  So what software needs to be upgraded?  Is there a
newer version of raidtools available?
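
For what it's worth, the raidtab for the RAID 1 half of this layout looks
roughly like the following (raidtools 0.90 syntax; device names are
illustrative, not necessarily the partitions I'm actually using):

  # /etc/raidtab - RAID 1 root array
  raiddev /dev/md0
      raid-level            1
      nr-raid-disks         2
      persistent-superblock 1
      chunk-size            64
      device                /dev/sda1
      raid-disk             0
      device                /dev/sdb1
      raid-disk             1

A similar raiddev block with raid-level 5 and three disks describes /dev/md1.
mkraid /dev/md0 builds the array from this file, and persistent-superblock 1
(together with partition type fd on the member partitions) is what the kernel
needs to autodetect the array at boot.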

I've upgraded to kernel 2.4.2-0.1.28enterprise, raidtools-0.90-13,
initscripts-5.78, lilo-21.4.4-13 and reiserfs-utils-3.x.0f-1 from RawHide
after running into these initial problems with Wolverine, but nothing has
made a difference.
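
The lilo side of it is along these lines (kernel and initrd paths are
illustrative; lilo 21.4.x is supposed to be able to install its boot record
onto a RAID 1 device directly):

  # /etc/lilo.conf - root and boot on the RAID 1 array
  boot=/dev/md0
  prompt
  timeout=50
  image=/boot/vmlinuz-2.4.2-0.1.28enterprise
      label=linux
      root=/dev/md0
      initrd=/boot/initrd-2.4.2-0.1.28enterprise.img
      read-only

After editing it, running lilo (or lilo -v for the details) reinstalls the
boot loader.  A bare "LI" at boot generally means the first stage found the
second-stage loader but couldn't run it, which usually points at a disk
geometry or moved-file problem rather than the RAID code itself.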


Reproducible: Always
Steps to Reproduce:
1.  Boot from root RAID 1, /dev/md0


	

Actual Results:  As described, the kernel panics.  The error message appears
to come from linux/drivers/md/md.c, around line 2884, which looks like the
default case taken when nothing else is specified for the RAID superblock.

Expected Results:  Should be able to boot from root RAID.

Actually, one thing I haven't tried upgrading yet is SysVinit - I've just
done that and will reboot with that change after I submit this.  If it
fixes the problem, I'll add a comment to this bug.

Comment 1 Arjan van de Ven 2001-03-29 21:01:09 UTC
1) Reiserfs doesn't work on software raid and isn't reliable on hardware raid.
2) Reiserfs is not supported by Red Hat and is included in the beta to assess
   the quality of reiserfs.

Comment 2 Arjan van de Ven 2001-03-29 21:10:12 UTC
The other thing: we fixed several boot-raid bugs recently and we now test this
in our lab; I assume this is fixed.
Reiserfs recently got fixed to at least "work" with raid, although you still
lose reliability.



