Bug 225547

Summary: ERROR: asr: Invalid magic number in RAID table; saw 0x
Product: Red Hat Enterprise Linux 4
Version: 4.0
Component: dmraid
Reporter: Peter Bieringer <pb>
Assignee: Heinz Mauelshagen <heinzm>
QA Contact: Corey Marthaler <cmarthal>
CC: agk, dwysocha, heinzm, mbroz, notting, prockai
Status: CLOSED NOTABUG
Severity: medium
Priority: medium
Hardware: All
OS: Linux
Doc Type: Bug Fix
Last Closed: 2008-10-16 12:42:24 UTC

Description Peter Bieringer 2007-01-31 09:55:10 UTC
Description of problem:
After booting the new kernel, strange error messages appear during LVM setup. The reason is unknown; the system is still running fine.

Version-Release number of selected component (if applicable):
initscripts-7.93.25.EL-1
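
For reference, the installed versions can be confirmed with rpm (a quick check, assuming an RPM-based installation with both packages present):

rpm -q initscripts dmraid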

How reproducible:
On every reboot

Steps to Reproduce:
1. Set up a system with standard Linux software RAID (md), no LVM configuration
2. Reboot
  
Actual results:
ERROR: asr: Invalid magic number in RAID table; saw 0x0, expected 0x900765C4.
ERROR: asr: Invalid magic number in RAID table; saw 0x0, expected 0x900765C4.
No RAID disks
Setting up Logical Volume Management: [  OK  ]
ERROR: asr: Invalid magic number in RAID table; saw 0x0, expected 0x900765C4.
ERROR: asr: Invalid magic number in RAID table; saw 0x0, expected 0x900765C4.
No RAID disks
Setting up Logical Volume Management: [  OK  ]

Expected results:
No such error messages

Additional info:
After enabling "set -x" in rc.sysinit, I found that the following command causes these messages:

+ /sbin/dmraid -i -a y
ERROR: asr: Invalid magic number in RAID table; saw 0x0, expected 0x900765C4.
ERROR: asr: Invalid magic number in RAID table; saw 0x0, expected 0x900765C4.
No RAID disks
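
The behaviour can also be reproduced by hand outside the boot sequence; a minimal sketch, assuming the same dmraid package and root privileges:

# Re-run the exact rc.sysinit invocation (-i ignores locking, -a y activates sets)
/sbin/dmraid -i -a y

# List the devices on which dmraid discovers RAID metadata (if any); with no
# ATARAID sets configured it should again find nothing
/sbin/dmraid -r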

The reason these messages appear twice is found in the following two pieces of code from rc.sysinit; dmraid is invoked once before and once again after mdadm assembles the software RAID arrays:

if [ -c /dev/mapper/control ]; then
    if [ -f /etc/multipath.conf -a -x /sbin/multipath.static ]; then
        modprobe dm-multipath >/dev/null 2>&1
        /sbin/multipath.static -v 0
        if [ -x /sbin/kpartx ]; then
            /sbin/dmsetup ls --target multipath --exec "/sbin/kpartx -a"
        fi
    fi

    if [ -x /sbin/dmraid ]; then
        modprobe dm-mirror > /dev/null 2>&1
        /sbin/dmraid -i -a y   # <- !!!!! first pair of messages
    fi
    ...

if [ -f /etc/mdadm.conf ]; then
    /sbin/mdadm -A -s
    # LVM2 initialization, take 2
    if [ -c /dev/mapper/control ]; then
        if [ -x /sbin/multipath.static ]; then
            modprobe dm-multipath >/dev/null 2>&1
            /sbin/multipath.static -v 0
            if [ -x /sbin/kpartx ]; then
                /sbin/dmsetup ls --target multipath --exec "/sbin/kpartx -a"
            fi
        fi

        if [ -x /sbin/dmraid ]; then
            modprobe dm-mirror > /dev/null 2>&1
            /sbin/dmraid -i -a y   # <- !!!!! second pair of messages
        fi
    ...
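
A possible local workaround that keeps dmraid installed would be to discard the probe's diagnostics in both places; a sketch only, assuming dmraid writes the ERROR lines to stderr:

if [ -x /sbin/dmraid ]; then
    modprobe dm-mirror > /dev/null 2>&1
    # Drop the harmless "Invalid magic number" probe errors; no dmraid
    # sets are configured on this system, so nothing is lost.
    /sbin/dmraid -i -a y 2>/dev/null
fi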

Note that the system has an ATARAID controller built in, but the feature is not used at all:

00:14.0 RAID bus controller: Silicon Image, Inc. Adaptec AAR-1210SA SATA HostRAID Controller (rev 02)
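
Such controllers can be spotted quickly with lspci (a generic check, nothing specific to this box):

lspci | grep -i raid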

Removing the "dmraid" package makes the issue go away.
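
The same workaround in command form (assumes root on an RPM-based system; note this also removes ATARAID support should it ever be needed):

rpm -e dmraid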

Comment 1 Dave Wysochanski 2008-03-18 19:12:06 UTC
It appears initscripts is calling dmraid when it should not.
I know it has been a long time, but are you still seeing this problem?


Comment 6 Heinz Mauelshagen 2008-10-16 12:42:24 UTC
Closing per comment #5.