Bug 656067 - md: array md127 already has disks
Status: CLOSED CURRENTRELEASE
Product: Fedora
Classification: Fedora
Component: mdadm
Version: 14
Hardware: x86_64 Linux
Priority: low  Severity: high
Assigned To: Doug Ledford
Fedora Extras Quality Assurance
Reported: 2010-11-22 19:19 EST by Dave Sharman
Modified: 2011-07-14 19:51 EDT (History)

Doc Type: Bug Fix
Last Closed: 2011-07-14 19:51:13 EDT
Description Dave Sharman 2010-11-22 19:19:33 EST
Description of problem:
I added two new disks to my F14 x86_64 system. I tried this two ways, both from the graphical disk utility and with the mdadm --create command.
After creating the new array, waiting for it to sync, and rebooting, the new array comes up degraded with one drive missing.
A search of /var/log/messages shows the errors:
md: array md127 already has disks!
md/raid1:md127: active with 1 out of 2 mirrors

If you resync and reboot, the problem persists.


Version-Release number of selected component (if applicable):


How reproducible:
Every time you restart.

Steps to Reproduce:
0a. mdadm --zero-superblock /dev/sdd1 (to ensure no remnants)
0b. mdadm --zero-superblock /dev/sde1 ...
1. mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
2. wait for sync...
3. reboot
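
The steps above can be collected into one script. This is a dry-run sketch that only prints each command (the device names /dev/sdd1 and /dev/sde1 are the ones from this report and will differ on other systems; drop the `run` indirection, or change its body to `"$@"`, to actually execute as root):

```shell
#!/bin/sh
# Dry-run sketch of the reproduction steps from this report.
# DEV1/DEV2 default to the reporter's devices; override via environment.
DEV1=${DEV1:-/dev/sdd1}
DEV2=${DEV2:-/dev/sde1}

# Print instead of execute; replace the body with "$@" to run for real.
run() { echo "+ $*"; }

# 0. Wipe any stale RAID superblocks so leftovers can't confuse assembly
run mdadm --zero-superblock "$DEV1"
run mdadm --zero-superblock "$DEV2"

# 1. Create a two-disk RAID1 array
run mdadm --create /dev/md0 --level=1 --raid-devices=2 "$DEV1" "$DEV2"

# 2. Wait for the initial sync to finish (poll /proc/mdstat)
run sh -c 'while grep -q resync /proc/mdstat; do sleep 10; done'

# 3. Reboot and then check whether the array came back degraded
run reboot
```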
  
Actual results:
See above.

Expected results:
uhm...   a non-degraded array would be just the sweetest thing.

Additional info:

As an aside, if you look at the file /dev/md/md-device-map, the UUID for the array in question doesn't match the UUID returned by mdadm --detail /dev/md0 (or md127, as the case may be).
I edited this file to put the correct UUID in it and rebooted... same issue, and yes, the file contents are wrong again. I don't know where it might be getting that UUID from.
At this point I haven't edited /etc/mdadm.conf to add the array there. But even if it is there, the same issue exists.

Thanks.

d.
Comment 1 Doug Ledford 2010-11-23 13:12:49 EST
The /dev/md/md-device-map file is generated at boot time, so editing it does no good.  When you create a new array, it gets a new UUID, so the UUID in mdadm.conf is definitely no longer valid.
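
For reference, the stale ARRAY line can be refreshed from the kernel's current view of the array (a sketch; run as root, and back up /etc/mdadm.conf before editing it):

```shell
# Print ARRAY lines with the real, current UUIDs (requires root)
mdadm --detail --scan
# Output has the general shape (the UUID value will differ per array):
#   ARRAY /dev/md0 metadata=1.2 UUID=...
# Replace the stale ARRAY line in /etc/mdadm.conf with this output,
# then rebuild the initramfs if the array is assembled at early boot.
```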

Can you try the mdadm package in updates-testing and see if it solves your particular problem?
Comment 2 Dave Sharman 2010-11-23 14:34:47 EST
Found and installed mdadm-3.1.3-0.git20100804.2.fc14.x86_64.rpm from koji... testing continues...

Dave
Comment 3 Dave Sharman 2010-11-23 16:27:09 EST
Well, 5 reboots later, the mirror has not broken.   Rebuilding the array to use the full TB drives (I was previously only using 100 GB, as it takes a whack of time to resync a TB of data).

So far so good.   

I'll update this post in a day or so if it remains stable across a few reboots.

Dave.
Comment 4 Dave Sharman 2010-11-25 16:05:52 EST
Hi, the version from updates-testing seems to have fixed the issue.  Thanks a lot.


d.
