Bug 600604 - mdadm usage bug in /sbin/mkdumprd may cause dumps to be lost
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: kexec-tools
Version: 6.0
Hardware: All
OS: Linux
Priority: low
Severity: high
Target Milestone: beta
Target Release: ---
Assigned To: Cong Wang
QA Contact: Han Pingtian
Keywords: Regression
Depends On:
Blocks: 5to6kexecTools
 
Reported: 2010-06-05 03:16 EDT by CAI Qian
Modified: 2013-09-29 22:15 EDT (History)
CC List: 6 users

See Also:
Fixed In Version: kexec-tools-2.0.0-79.el6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 456154
Environment:
Last Closed: 2010-11-11 09:46:05 EST
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Comment 4 Han Pingtian 2010-06-23 03:21:47 EDT
Verified with -84.el6. With this fake /etc/mdadm.conf:

# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md1 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md2 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md4 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md5 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md6 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md7 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md8 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md9 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md10 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md11 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md12 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md13 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md14 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md15 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md16 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md17 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md18 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md19 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md20 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md21 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md22 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md23 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md24 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md25 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md26 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md27 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md28 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md29 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md30 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md31 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4
ARRAY /dev/md3 level=raid5 num-devices=3 UUID=3ea5cf72:552f6677:231e2956:a010eee4

I got an init script in the initrd for the kdump kernel. With this snippet of code in the init:

if [ -f /etc/mdadm.conf ]
then
  for i in `awk '/^ARRAY[[:space:]]/{print $2}' /etc/mdadm.conf`
  do
          MD_MIN=`echo $i | sed -e 's/^[^0-9]*\([0-9]\+\)$/\1/'`
          mknod $i b 9 $MD_MIN
  done
fi

I can get the correct MD_MIN.
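
As a quick illustration (not part of the generated init; the device names below are just examples), the same sed expression can be exercised by hand to confirm it reduces an md device path to its minor number, which is what the final argument to mknod needs to be:

for dev in /dev/md0 /dev/md9 /dev/md31
do
        # Strip everything before the trailing digits, e.g. /dev/md31 -> 31
        MD_MIN=`echo $dev | sed -e 's/^[^0-9]*\([0-9]\+\)$/\1/'`
        echo "$dev -> minor $MD_MIN"
        # In the init above this value is then used as: mknod $dev b 9 $MD_MIN
done

(Major number 9 is the standard block major for Linux md/RAID devices.)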
Comment 5 releng-rhel@redhat.com 2010-11-11 09:46:05 EST
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.
