Bug 168317 - default mdadm command in rc.sysinit does not init RAID devices
Status: CLOSED CURRENTRELEASE
Product: Fedora
Classification: Fedora
Component: mdadm
Version: 4
Hardware: i686 Linux
Priority: medium
Severity: medium
Assigned To: Doug Ledford
Reported: 2005-09-14 17:18 EDT by Rod Scott
Modified: 2007-11-30 17:11 EST (History)
Fixed In Version: fc6
Doc Type: Bug Fix
Last Closed: 2007-07-03 11:21:07 EDT
Description Rod Scott 2005-09-14 17:18:24 EDT

Description of problem:
This originally started as an upgrade issue (FC3 -> FC4) where my RAID config was not being recognized. I gave up on getting it to migrate and ended up re-creating the RAID 5 device (/dev/md1) from scratch. Even after recreating it, it was never recognized on boot up.

Once the system did boot up, I could only ever get the device activated by using the command "mdadm -A /dev/md1". I also created an mdadm.conf file listing the devices and the array information, but could never get the "mdadm -A -s" form of the command to work. Because that is the form specified in /etc/rc.sysinit, I modified /etc/rc.sysinit to use "mdadm -A /dev/md1" instead, and everything works fine.
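For reference, the two invocations at issue look like this (the exact rc.sysinit line varies by release, so treat this as an approximation of the change, not the shipped script):

```shell
# Default FC4 behavior (approximate): assemble all arrays listed in /etc/mdadm.conf
mdadm -A -s

# Workaround used here: assemble the array explicitly by device node
mdadm -A /dev/md1
```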

Version-Release number of selected component (if applicable):
mdadm-1.11.0-4.fc4

How reproducible:
Always

Steps to Reproduce:
1. create RAID device (e.g., /dev/md1) but don't add entry to /etc/fstab
2. create /etc/mdadm.conf file as per man pages 
3. reboot system and run 'mdadm -D /dev/md1'
4. run 'mdadm -A -s' and 'mdadm -D /dev/md1' again
5. run 'mdadm -A /dev/md1' and 'mdadm -D /dev/md1' again
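The steps above can be sketched as a shell session (run as root; /dev/md1 and the member disks are taken from this report, and the creation command in step 1 is an assumed reconstruction):

```shell
# 1. Create the RAID device (4-disk RAID 5 on whole disks, as in this report);
#    no entry is added to /etc/fstab
mdadm -C /dev/md1 -l 5 -n 4 /dev/hde /dev/hdg /dev/hdi /dev/hdk

# 2. Record the array in /etc/mdadm.conf (see mdadm.conf(5))
echo "DEVICE /dev/hde /dev/hdg /dev/hdi /dev/hdk" >  /etc/mdadm.conf
echo "ARRAY /dev/md1 devices=/dev/hde,/dev/hdg,/dev/hdi,/dev/hdk" >> /etc/mdadm.conf

# 3-5. After a reboot, compare the three invocations
mdadm -D /dev/md1     # reports no active array
mdadm -A -s           # should assemble from mdadm.conf, but did not here
mdadm -A /dev/md1     # explicit assembly works
```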
  

Actual Results:  after step 3 above, the command indicated that no RAID devices were active/running;
after step 4, the same thing happens - no RAID devices;
after step 5, the RAID device is activated and the detailed status information is provided.

Expected Results:  'mdadm -A -s' should have recognized the /etc/mdadm.conf file and started the RAID device on boot up. 

Additional info:

After I got everything back up and running and started my data restore, I checked my old /etc/mdadm.conf and /etc/rc.sysinit files and they were identical to the defaults (mdadm.conf - same devices, etc; rc.sysinit had not been modified and still used 'mdadm -A -s').
Comment 1 Doug Ledford 2005-10-31 15:28:20 EST
Please post the contents of your mdadm.conf file and the output of /proc/mdstat
when the array in question is up and running.
Comment 2 Rod Scott 2005-10-31 17:33:43 EST
(In reply to comment #1)
> Please post the contents of your mdadm.conf file and the output of /proc/mdstat
> when the array in question is up and running.

Here it is along with the output of mdadm -D /dev/md1:

# mdadm -D /dev/md1
/dev/md1:
        Version : 00.90.02
  Creation Time : Wed Sep 14 10:10:25 2005
     Raid Level : raid5
     Array Size : 240129600 (229.01 GiB 245.89 GB)
    Device Size : 80043200 (76.34 GiB 81.96 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Mon Oct 31 17:29:14 2005
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 0d52afed:19610c0a:ac9a96b9:913257e9
         Events : 0.301475

    Number   Major   Minor   RaidDevice State
       0      33        0        0      active sync   /dev/hde
       1      34        0        1      active sync   /dev/hdg
       2      56        0        2      active sync   /dev/hdi
       3      57        0        3      active sync   /dev/hdk

# cat /proc/mdstat
Personalities : [raid5] 
md1 : active raid5 hde[0] hdk[3] hdi[2] hdg[1]
      240129600 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      
unused devices: <none>


# cat /etc/mdadm.conf
DEVICE /dev/hde /dev/hdg /dev/hdi /dev/hdk

ARRAY /dev/md1 devices=/dev/hde,/dev/hdg,/dev/hdi,/dev/hdk
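As an aside, an ARRAY line keyed on the array UUID (taken from the mdadm -D output above) would be an alternative to the devices= list; this is a hedged sketch, not the reporter's actual file:

```
DEVICE /dev/hde /dev/hdg /dev/hdi /dev/hdk

ARRAY /dev/md1 UUID=0d52afed:19610c0a:ac9a96b9:913257e9
```

Matching by UUID lets assembly succeed even if the kernel renames the underlying devices between boots.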
Comment 3 Christian Iseli 2007-01-19 19:19:36 EST
This report targets the FC3 or FC4 products, which have now been EOL'd.

Could you please check whether it still applies to a current Fedora release, and
either update the target product or close it?

Thanks.
Comment 4 Doug Ledford 2007-07-03 11:21:07 EDT
This bug should no longer be present in modern Fedora.  It was most likely an
issue with the mdadm -A -s command also needing the --auto=yes flag to create
the device node before attempting to start the device.  Note that the bug is
also related to using whole-disk devices instead of partitions to create the
RAID array: if the md1 device had been made out of partitions that were marked
for auto-assembly, the bug would have been avoided entirely.
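A hedged sketch of the two fixes Comment 4 describes (flag names per mdadm(8); the partition names are illustrative, not from this system):

```shell
# Fix 1: let mdadm create the /dev/md1 node itself before assembling
mdadm -A -s --auto=yes

# Fix 2: build the array from partitions of type "fd" (Linux raid autodetect)
# instead of whole disks, so the kernel can auto-assemble the array at boot
mdadm -C /dev/md1 -l 5 -n 4 /dev/hde1 /dev/hdg1 /dev/hdi1 /dev/hdk1
```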
