Bug 1300579 - Unable to assign hot spare while running IO on Degraded MD Array
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: mdadm
Version: 7.2
Hardware: x86_64 Linux
Priority: unspecified Severity: urgent
Target Milestone: rc
Target Release: 7.3
Assigned To: Jes Sorensen
QA Contact: Zhang Yi
Docs Contact: Milan Navratil
Whiteboard: dell_server dell_mustfix_7.3
Depends On: 1273351
Blocks: 1313485 1274397 1304407
Reported: 2016-01-21 03:27 EST by Nanda Kishore Chinnaram
Modified: 2016-11-03 20:08 EDT (History)
CC List: 12 users

See Also:
Fixed In Version: mdadm-3.4-2.el7
Doc Type: Bug Fix
Doc Text:
Using *mdadm* to assign a hot spare to a degraded array while running I/O operations no longer fails. Previously, assigning a hot spare to a degraded array while running I/O operations on the MD array could fail, and the *mdadm* utility returned error messages such as: "mdadm: /dev/md1 has failed so using --add cannot work and might destroy mdadm: data on /dev/sdd1. You should stop the array and re-assemble it". A patch has been applied to fix this bug, and adding a hot spare to a degraded array now completes as expected in the described situation.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-11-03 20:08:03 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Nanda Kishore Chinnaram 2016-01-21 03:27:27 EST
Description of problem:
A system has four drives (sda, sdb, sdc, sdd). A RAID1 array is created with mdadm using partitions sdb1 and sdc1, and I/O is started on the array. The array becomes degraded during I/O. When partition sdd1 is then added as a hot spare, mdadm fails with the error "/dev/md1 has failed so using --add cannot work and might destroy".

This issue is already fixed upstream. Fix details: https://github.com/neilbrown/mdadm/commit/d180d2aa2a1770af1ab8520d6362ba331400512f

Version-Release number of selected component (if applicable):
mdadm-3.3.2-7.el7.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create the MD array with "mdadm -C /dev/md1 --metadata=1.2 -l1 -n2 /dev/sdb1 /dev/sdc1".
2. Wait until the resync is completed.
3. Mount the MD array.
4. Run I/O on the MD array.
5. Degrade the array by pulling out the sdb drive.
6. Add sdd1 as a hot spare with "mdadm --manage /dev/md1 --add /dev/sdd1" (a consolidated sketch of these steps follows).
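
A minimal consolidated sketch of the steps above. Device names are taken from this report; the filesystem, mount point, and dd workload are assumptions standing in for the reporter's I/O, and "mdadm --fail" is used as an approximation of physically pulling the sdb drive:

  mdadm -C /dev/md1 --metadata=1.2 -l1 -n2 /dev/sdb1 /dev/sdc1
  # Wait for the initial resync to finish.
  while grep -q resync /proc/mdstat; do sleep 5; done
  mkfs.xfs /dev/md1      # assumption: any filesystem should reproduce this
  mount /dev/md1 /mnt
  # Background I/O on the array (stand-in for the reporter's workload).
  dd if=/dev/zero of=/mnt/io.bin bs=1M count=4096 &
  # Degrade the array; the report pulled the disk physically.
  mdadm --manage /dev/md1 --fail /dev/sdb1
  # On mdadm-3.3.2-7.el7 this add fails with the error shown below.
  mdadm --manage /dev/md1 --add /dev/sdd1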

Actual results:
mdadm throws the following error:
"mdadm: /dev/md1 has failed so using --add cannot work and might destroy
 mdadm: data on /dev/sdd1. You should stop the array and re-assemble it"

Expected results:
The partition should be added as a hot spare successfully.
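
A quick check of the expected outcome, sketched under the assumption that a build containing the fix (mdadm-3.4-2.el7 or later, per "Fixed In Version") is installed:

  mdadm --manage /dev/md1 --add /dev/sdd1   # should exit 0 with no error
  cat /proc/mdstat                          # sdd1 listed; recovery begins
  mdadm --detail /dev/md1                   # sdd1 shown as rebuilding spare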

Additional info:
Kernel Version: 3.10.0-327.el7.x86_64
Comment 2 Jes Sorensen 2016-01-22 13:13:46 EST
I plan to update to mdadm-3.3.4 for 7.3, which will include this fix.
Comment 3 Jes Sorensen 2016-06-09 11:21:58 EDT
This was resolved via bz#1273351 updating to mdadm-3.4
Comment 5 Nanda Kishore Chinnaram 2016-07-04 12:12:32 EDT
Hi Jes, 
Can you provide access to bz#1273351?
Comment 6 Jes Sorensen 2016-07-20 07:29:10 EDT
(In reply to Nanda Kishore Chinnaram from comment #5)
> Hi Jes, 
> Can you provide access to bz#1273351?

Nanda,

I cannot add you myself, but I have requested access to it for you.

Cheers,
Jes
Comment 7 Nanda Kishore Chinnaram 2016-08-09 18:14:26 EDT
Verified the issue on the RHEL 7.3 Alpha1 build; it is resolved.
Comment 8 Zhang Yi 2016-08-17 05:14:10 EDT
Passed the regression test with [1]; the patch from comment 1 exists in mdadm-3.4-10.el7.
Changing the status to VERIFIED.

[1]
kernel-3.10.0-489.el7.x86_64.rpm 
mdadm-3.4-9.el7.x86_64.rpm 
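
For reference, a sketch of confirming the tested package set before re-running the reproduction steps (NVRs are the ones listed in [1]):

  rpm -q kernel mdadm
  # kernel-3.10.0-489.el7.x86_64
  # mdadm-3.4-9.el7.x86_64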

Thanks
Yi
Comment 10 errata-xmlrpc 2016-11-03 20:08:03 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2182.html
