Description of problem:
A system has four drives (sda, sdb, sdc, sdd). A RAID1 array is created with mdadm from partitions sdb1 and sdc1, and I/O is started on the array. The array becomes degraded during the I/O. When partition sdd1 is then added as a hot spare, the add is rejected with the error "mdadm: /dev/md1 has failed so using --add cannot work and might destroy data on /dev/sdd1".
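For reference, the degraded state can be confirmed before attempting the --add; a minimal check, assuming the array is /dev/md1 as above:

# cat /proc/mdstat
(a degraded two-device RAID1 shows "[2/1]" with "[_U]" or "[U_]")
# mdadm --detail /dev/md1
(the detail output reports "State : clean, degraded")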
This issue is already fixed upstream. Fix details: https://github.com/neilbrown/mdadm/commit/d180d2aa2a1770af1ab8520d6362ba331400512f
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Create the MD array: "mdadm -C /dev/md1 --metadata=1.2 -l1 -n2 /dev/sdb1 /dev/sdc1".
2. Wait until the initial resync is completed.
3. Mount the MD array.
4. Run I/O on the MD array.
5. Degrade the array by pulling out the sdb drive.
6. Add sdd1 as a hot spare: "mdadm --manage /dev/md1 --add /dev/sdd1".
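The steps can be scripted end to end. A sketch, assuming sdb1/sdc1/sdd1 are free partitions, using ext4 and a background dd stream as an arbitrary I/O load, and substituting "mdadm --fail" for the physical drive pull in step 5 (a real hot-unplug may be needed to match the report exactly):

# mdadm -C /dev/md1 --metadata=1.2 -l1 -n2 /dev/sdb1 /dev/sdc1
# while grep -q resync /proc/mdstat; do sleep 5; done
# mkfs.ext4 /dev/md1 && mount /dev/md1 /mnt
# dd if=/dev/zero of=/mnt/io.bin bs=1M count=4096 &
# mdadm --manage /dev/md1 --fail /dev/sdb1
# mdadm --manage /dev/md1 --add /dev/sdd1

The final --add fails with the error below.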
Actual results:
"mdadm: /dev/md1 has failed so using --add cannot work and might destroy
mdadm: data on /dev/sdd1. You should stop the array and re-assemble it"

Expected results:
The drive should be added as a hot spare successfully.
Kernel Version: 3.10.0-327.el7.x86_64
I plan to update to mdadm-3.3.4 for 7.3, which will include this fix.
This was resolved via bz#1273351, which updated the package to mdadm-3.4.
Can you provide access to bz#1273351?
(In reply to Nanda Kishore Chinnaram from comment #5)
> Hi Jes,
> Can you provide access to bz#1273351?
I cannot add you myself, but I have asked whether you can be given access to it.
Verified the issue in the RHEL 7.3 Alpha1 build; it is resolved.
Regression test passed; the patch from comment 1 is present in mdadm-3.4-10.el7.
Changing status to VERIFIED.
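For anyone re-verifying, a minimal pass on the updated package (version strings taken from the comments above, array layout from the steps to reproduce):

# rpm -q mdadm
(expect mdadm-3.4-10.el7 or later)
# mdadm --manage /dev/md1 --add /dev/sdd1
# cat /proc/mdstat
(sdd1 should be accepted and recovery should start)
# mdadm --detail /dev/md1
(sdd1 should be listed as "spare rebuilding")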
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.