Red Hat Bugzilla – Bug 656067
md: array md127 already has disks
Last modified: 2011-07-14 19:51:13 EDT
Description of problem:
I added two new disks to my F14 x86_64 system. I tried this two ways, both from the graphical disk utility and using the mdadm --create command.
After creating the new array and waiting for it to sync, I rebooted. After the system comes up, the new array is degraded, with one drive missing from the array.
A search of /var/log/messages shows the errors:
md: array md127 already has disks!
md/raid1:md127: active with 1 out of 2 mirrors.
If you resync and reboot, the problem persists.
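For anyone hitting the same symptom, the degraded state can be confirmed from a script rather than by digging through the logs. A minimal sketch, assuming a two-disk RAID1 as in this report (md127 and the bracket patterns are what /proc/mdstat prints for a two-way mirror):

```shell
#!/bin/sh
# Sketch only: assumes a two-way RAID1 like the md127 in this report.
# A healthy mirror shows "[2/2] [UU]" in /proc/mdstat; "[U_]" or "[_U]"
# means one member failed to assemble.
if grep -qE '\[U_\]|\[_U\]' /proc/mdstat; then
    echo "md array degraded"
else
    echo "md arrays ok"
fi
```
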
Version-Release number of selected component (if applicable):
How reproducible:
Every time you restart.
Steps to Reproduce:
0a. mdadm --zero-superblock /dev/sdd1 (to ensure no remnants)
0b. mdadm --zero-superblock /dev/sde1 ...
1. mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
2. Wait for sync...
3. Reboot.
Expected results: uhm... a non-degraded array would be just the sweetest thing.
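Step 2's "wait for sync" can be scripted instead of watched by hand. A sketch, assuming mdadm 3.x (its --wait misc option blocks until any resync on the device finishes):

```shell
# Block until the initial resync of /dev/md0 completes, then show the
# array's /proc/mdstat lines; a clean mirror reads "[2/2] [UU]".
mdadm --wait /dev/md0
grep -A1 '^md0' /proc/mdstat
```
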
As an aside: if you look at the file /dev/md/md-device-map, the UUID for the array in question doesn't match the UUID returned by mdadm --detail /dev/md0 (or md127, as the case may be).
I edited this file to put the correct UUID in it and rebooted... same issue, and yes, the file contents are wrong again. I don't know where it might be getting that UUID from.
At this point I haven't edited /etc/mdadm.conf to put the array in there, but even if it is there, the same issue exists.
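The mismatch can be checked side by side. A sketch; the field positions assume the usual md-device-map layout (device name, metadata version, UUID, path):

```shell
# UUID recorded by the boot-time device map (third field of the md127 line):
awk '$1 == "md127" {print $3}' /dev/md/md-device-map
# UUID the array itself carries (last field of mdadm's UUID line):
mdadm --detail /dev/md127 | awk '/UUID/ {print $NF}'
```

If the two lines differ, assembly at boot is looking for an array that no longer exists.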
The /dev/md/md-device-map file is generated at boot time, so editing it does no good. When you create a new array, it gets a new UUID, so the UUID in mdadm.conf is definitely no longer valid.
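If you do keep an ARRAY line in mdadm.conf, the safe way to refresh it after recreating an array is to let mdadm emit it. A sketch; the dracut step is a Fedora assumption, since the initramfs carries its own copy of the config:

```shell
# Append a fresh ARRAY line carrying the array's current UUID:
mdadm --detail --scan >> /etc/mdadm.conf
# Remove any stale ARRAY line for the old UUID by hand, then rebuild
# the initramfs so early-boot assembly sees the updated config:
dracut --force
```
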
Can you try the mdadm package in updates-testing and see if it solves your particular problem?
Found and installed mdadm-3.1.3-0.git20100804.2.fc14.x86_64.rpm from koji... testing continues...
Well, five reboots later, the mirror has not broken. I'm rebuilding the array to use the full 1 TB drives (it was previously using only 100 GB, since it takes a whack of time to resync a TB of data).
So far so good.
I'll update this post in a day or so if it remains stable across a few reboots.
Hi, the version from updates-testing seems to have fixed the issue. Thanks a lot.