Bug 490959 - incremental assembly of partitioned array using mdadm.conf file broken
Product: Fedora
Classification: Fedora
Component: mdadm
Hardware: All   OS: Linux
Priority: low   Severity: medium
Assigned To: Doug Ledford
Fedora Extras Quality Assurance
Reported: 2009-03-18 13:18 EDT by Doug Ledford
Modified: 2009-07-14 11:16 EDT
CC: 3 users

Doc Type: Bug Fix
Clone Of: 481561
Last Closed: 2009-07-14 11:16:24 EDT

Description Doug Ledford 2009-03-18 13:18:45 EDT
+++ This bug was initially created as a clone of Bug #481561 +++

Created an attachment (id=329975)
patch with fix

Description of problem:

With /dev/md_d0 listed in /etc/mdadm.conf, incremental assembly
with mdadm -I --auto=yes fails. This is specific to components
with super-minor 0. The bug affects array assembly by udev after
boot, in which case device numbers 9, 0 are assigned in place of
the correct 254, 0.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:

1. create a linux raid array with partitions, with minor
   number 0 (e.g. use the /dev/md_d0 name)
2. create an mdadm.conf record for the array with
   name /dev/md_d0 indicating a partitionable array
3. reboot
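
The mdadm.conf record from step 2 could look like the sketch below. The super-minor= and auto= keywords are documented mdadm.conf directives; the exact line here is my illustration of the scenario, not copied from the reporter's configuration.

```
# /etc/mdadm.conf -- record for a partitionable array whose components
# carry super-minor 0 in their superblocks.  auto=part marks the array
# as partitionable, so it should come up as /dev/md_d0 (major 254 on
# the reporter's system), not as /dev/md0 (major 9).
ARRAY /dev/md_d0 super-minor=0 auto=part
```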
Actual results:

Array assembled by udev has major number 9, minor
number 0, device file /dev/md0 -> not partitionable

Expected results:

Array is assembled with device file /dev/md_d0, major
number 254, minor number 0.
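
For context on the numbers above (my illustration, not part of the original report): on Linux, non-partitionable md arrays use block major 9, while partitionable (mdp) arrays get a dynamically assigned major, 254 on the reporter's system. A small Python sketch of how those (major, minor) pairs map to distinct dev_t values:

```python
import os

# Non-partitionable md device: major 9, minor 0 -> /dev/md0
md0 = os.makedev(9, 0)

# Partitionable (mdp) device on the reporter's system: major 254, minor 0
# -> /dev/md_d0 (254 is dynamically assigned, so it can vary per system)
md_d0 = os.makedev(254, 0)

print(os.major(md0), os.minor(md0))      # 9 0
print(os.major(md_d0), os.minor(md_d0))  # 254 0

# The bug: udev assembled the array as (9, 0) instead of (254, 0),
# producing /dev/md0, which the kernel does not treat as partitionable.
```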

Additional info:

I am attaching a patch which should fix it.

--- Additional comment from rvykydal@redhat.com on 2009-01-26 06:54:47 EDT ---

Created an attachment (id=329976)
reproducer session without and with patch from description

--- Additional comment from jwilson@redhat.com on 2009-02-24 11:42:12 EDT ---

Hm... Wondering if this is related to something I'm seeing too... F10 install originally, upgraded to rawhide, now gets this on boot initially for its md devices:

# cat /proc/mdstat 
Personalities : [raid1] [raid0] 
md126 : active raid1 sda3[0]
      22531072 blocks [2/1] [U_]
md127 : inactive sda7[0]
      25366528 blocks
md0 : active raid1 sda2[0] sdb1[1]
      200704 blocks [2/2] [UU]
md3 : inactive sdb6[1](S)
      25366528 blocks
md2 : inactive sdb2[1](S)
      22531072 blocks
md1 : active raid1 sda5[0] sdb3[1]
      22531008 blocks [2/2] [UU]
unused devices: <none>

The expected layout is:

# cat /proc/mdstat 
Personalities : [raid1] [raid0] 
md0 : active raid1 sda2[0] sdb1[1]
      200704 blocks [2/2] [UU]
md3 : inactive sda7[0] sdb6[1]
      50733056 blocks
md2 : active raid1 sda3[0] sdb2[1]
      22531072 blocks [2/2] [UU]
md1 : active raid1 sda5[0] sdb3[1]
      22531008 blocks [2/2] [UU]
unused devices: <none>

Sometimes md0 gets screwed up too. All the arrays are properly defined in /etc/mdadm.conf, so far as I can tell. It seems the arrays are being started in the initrd before all the components have actually been found, and the rest are then found by udev (I get spew about this just after 'starting udev').

--- Additional comment from dledford@redhat.com on 2009-03-18 13:13:10 EDT ---

Jared, your issue is different.  Your issue is a duplicate of bug 488038.

--- Additional comment from dledford@redhat.com on 2009-03-18 13:16:52 EDT ---

Radek, this bug is specific to mdadm-2.6.7-1, which is no longer in rawhide. Rawhide now carries mdadm-3.0, and this bug no longer applies to that version: mdadm 3.0 no longer creates partitionable arrays at all. Instead it relies on the kernel's built-in support for partitions on generic block devices, so all arrays under mdadm 3.0 are normal arrays.

However, this problem does still exist in F10/F9, and I can't upgrade those to mdadm-3.0, so I'm moving it to F10 and cloning for F9.
Comment 1 Bug Zapper 2009-06-09 23:37:26 EDT
This message is a reminder that Fedora 9 is nearing its end of life.
Approximately 30 (thirty) days from now Fedora will stop maintaining
and issuing updates for Fedora 9.  It is Fedora's policy to close all
bug reports from releases that are no longer maintained.  At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '9'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 9's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that 
we may not be able to fix it before Fedora 9 is end of life.  If you 
would still like to see this bug fixed and are able to reproduce it 
against a later version of Fedora please change the 'version' of this 
bug to the applicable version.  If you are unable to change the version, 
please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events.  Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

The process we are following is described here: 
Comment 2 Bug Zapper 2009-07-14 11:16:24 EDT
Fedora 9 changed to end-of-life (EOL) status on 2009-07-10. Fedora 9 is 
no longer maintained, which means that it will not receive any further 
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of 
Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.
