Bug 33949 - Root RAID 1 causes obsolete MD ioctl error message
Product: Red Hat Linux
Classification: Retired
Component: kernel
Hardware: i386 Linux
Priority: medium
Severity: medium
Assigned To: Michael K. Johnson
QA Contact: Brock Organ
Reported: 2001-03-29 15:47 EST by kevin_myer
Modified: 2007-04-18 12:32 EDT

Doc Type: Bug Fix
Last Closed: 2001-03-29 15:47:19 EST

Description kevin_myer 2001-03-29 15:47:12 EST
From Bugzilla Helper:
User-Agent: Mozilla/4.76 [en] (X11; U; Linux 2.4.2-0.1.28enterprise i686)

I have been testing Red Hat Wolverine on my Dell PowerEdge 1300 server.
Long story short: I currently have everything running off a single 20GB IDE
drive and am transitioning to three Ultra160 9GB drives on an Adaptec U2W
controller.  I am moving it all to software RAID as soon as I can get it
booting, which I currently can't.  I created two software RAID partitions,
/dev/md0 and /dev/md1: /dev/md0 is RAID 1, /dev/md1 is RAID 5, and both are
reiserfs partitions.
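
For reference, that layout corresponds to an /etc/raidtab along these lines
(a sketch only; the member partitions are assumptions, since the report
doesn't name them):

  # RAID 1 for the root filesystem; the member partitions should be type
  # 0xfd (Linux raid autodetect) so the kernel can assemble md0 at boot
  raiddev /dev/md0
      raid-level              1
      nr-raid-disks           2
      nr-spare-disks          0
      persistent-superblock   1
      chunk-size              4
      device                  /dev/sda1
      raid-disk               0
      device                  /dev/sdb1
      raid-disk               1

  # RAID 5 across the remaining space on all three SCSI disks
  raiddev /dev/md1
      raid-level              5
      nr-raid-disks           3
      nr-spare-disks          0
      persistent-superblock   1
      parity-algorithm        left-symmetric
      chunk-size              32
      device                  /dev/sda2
      raid-disk               0
      device                  /dev/sdb2
      raid-disk               1
      device                  /dev/sdc2
      raid-disk               2

The arrays are then created with mkraid /dev/md0 and mkraid /dev/md1.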
To move everything from the IDE drive to the RAID partitions I used
"find <old partition name> -xdev | cpio -pm <new partition name>" (which
incidentally causes another problem: dmesg shows messages about running out
of memory and the kernel starts killing processes, like httpd and mysqld,
but the copy finishes fine...).
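
Spelled out with hypothetical mount points (the actual paths aren't given
above), that copy looks like the following; find is run from inside the
source tree so its output is relative, and cpio's -d flag (create leading
directories) is usually wanted as well:

  # mount the new RAID 1 root somewhere temporary
  mount /dev/md0 /mnt/newroot

  # -xdev keeps find on this one filesystem; -p is cpio's pass-through
  # (copy) mode, -d creates leading directories, -m preserves mtimes
  cd / && find . -xdev | cpio -pdm /mnt/newroot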
I set up lilo for a bootable RAID configuration and rebooted.  I haven't
gotten the boot process past the "LI" prompt (a lilo.conf sketch for this
setup follows the panic message below), but I can boot from a floppy disk.
That works fine: the RAM disk loads, then the kernel image loads.  It
attempts to mount /dev/md0 and then the kernel panics with the following:

swapper (pid 1) used obsolete MD ioctl, upgrade your software to use new
ictls <this is verbatim>
kernel panic: unable to mount device (09:00) on VFS <from my flawed memory>
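
On the "LI" hang mentioned above: "LI" classically means LILO's first stage
loaded the second stage but couldn't run it, which is often a disk-geometry
problem.  A minimal lilo.conf sketch for root on RAID 1 (device names, the
kernel file names, and the use of linear are assumptions, not taken from
this report):

  # Write the boot record to the first disk's MBR; repeating with
  # boot=/dev/sdb keeps the machine bootable if sda dies.  Whether this
  # lilo version accepts boot=/dev/md0 directly is version-dependent.
  boot=/dev/sda
  linear                  # or lba32; "LI" hangs are often geometry trouble
  map=/boot/map
  install=/boot/boot.b
  prompt
  timeout=50
  image=/boot/vmlinuz-2.4.2-0.1.28enterprise
      label=linux
      root=/dev/md0
      initrd=/boot/initrd-2.4.2-0.1.28enterprise.img
      read-only

Remember to re-run /sbin/lilo after any change to lilo.conf or the kernel.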

The only problem, unless I'm mistaken, is that no software besides the
kernel itself is loaded at that point.  So what software needs to be
upgraded?  Is there a newer version of raidtools available?
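
For what it's worth, the installed raidtools version and the kernel's view
of the arrays can be checked with standard commands:

  rpm -q raidtools    # version of the userspace RAID tools
  cat /proc/mdstat    # the kernel's view of the md arrays and their state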

I've upgraded to kernel 2.4.2-0.1.28enterprise, raidtools-0.90-13,
initscripts-5.78, lilo-21.4.4-13, and reiserfs-utils-3.x.0f-1 from Rawhide
after running into these initial problems with Wolverine, but nothing has
made a difference.
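
Roughly, those upgrades were plain rpm installs (the package file names here
are assumed from the versions above, not copied from the report):

  # -F (freshen) upgrades only packages that are already installed
  rpm -Fvh raidtools-0.90-13.i386.rpm initscripts-5.78-1.i386.rpm \
           lilo-21.4.4-13.i386.rpm reiserfs-utils-3.x.0f-1.i386.rpm
  # kernels are installed side by side (-i) rather than upgraded in place
  rpm -ivh kernel-enterprise-2.4.2-0.1.28.i686.rpm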

Reproducible: Always
Steps to Reproduce:
1.  Boot from root RAID 1, /dev/md0


Actual Results:  As described, the kernel panics.  The error message
appears to come from linux/drivers/md/md.c, around line 2884, which appears
to be the default case taken when no recognized RAID ioctl is specified.
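
A quick way to confirm where the message comes from, assuming the kernel
source tree is unpacked under /usr/src/linux:

  grep -n "obsolete MD ioctl" /usr/src/linux/drivers/md/md.c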

Expected Results:  Should be able to boot from root RAID.

Actually, one thing I haven't tried upgrading yet is SysVinit.  I've just
done that and will reboot with that change after I submit this.  If it
fixes the problem, I'll add a comment to this bug.
Comment 1 Arjan van de Ven 2001-03-29 16:01:09 EST
1) Reiserfs doesn't work on software RAID and isn't reliable on hardware RAID.
2) Reiserfs is not supported by Red Hat and is included in the beta to assess
    the quality of reiserfs.
Comment 2 Arjan van de Ven 2001-03-29 16:10:12 EST
The other thing: we fixed several boot-RAID bugs recently, and we now test
this in our lab; I assume this is fixed.
Reiserfs recently got fixed to at least "work" with RAID, although you still
lose reliability.
