Bug 886109 - [Intel F17 Bug] Failed disk is still available in volume/container
Summary: [Intel F17 Bug] Failed disk is still available in volume/container
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Fedora
Classification: Fedora
Component: mdadm
Version: 17
Hardware: All
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Jes Sorensen
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-12-11 14:30 UTC by Maciej Patelczyk
Modified: 2013-01-05 06:52 UTC
CC: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1010859
Environment:
Last Closed: 2013-01-05 06:52:29 UTC
Type: Bug
Embargoed:


Attachments
udev rules patch (2.11 KB, patch)
2012-12-11 14:30 UTC, Maciej Patelczyk

Description Maciej Patelczyk 2012-12-11 14:30:44 UTC
Created attachment 661493
udev rules patch

Description of problem:
When a RAID volume loses a disk, the failed disk remains present in the volume and in the container. The RAID volume stays in the normal state (it should be degraded), so a rebuild cannot start.

How reproducible:
Always

Steps to Reproduce:
# stop all running md arrays
mdadm -Ss
# wipe any existing RAID metadata from the member disks
mdadm --zero-superblock /dev/sd[b-d]
# create an IMSM container over the three disks
mdadm -C /dev/md/imsm0 -amd -e imsm -n 3 /dev/sdb /dev/sdc /dev/sdd -R
# create a RAID5 volume inside the container
mdadm -C /dev/md/raid5 -amd -l5 -n 3 /dev/sdb /dev/sdc /dev/sdd -R
# wait for the initial resync to finish
mdadm --wait /dev/md/raid5
# power off a raid member disk (e.g. /dev/sdd)
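
If physically powering off a disk is inconvenient, removing the device through sysfs generates the same block-device "remove" uevent; a minimal sketch, assuming /dev/sdd is a SCSI disk:

# delete the SCSI device; the kernel tears down /dev/sdd and emits a "remove" uevent
echo 1 > /sys/block/sdd/device/delete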

Actual results:
The failed disk is still present in the container/volume. The state in the 'mdadm -D /dev/md/raid5' output is 'clean'.

Expected results:
The failed disk should disappear from the container and the volume. The state in the 'mdadm -D /dev/md/raid5' output should be 'clean, degraded'.
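
The array state can be verified after the failure with, for example (device names as in the steps above):

# overall view of md arrays; the failed member should show up as faulty/removed
cat /proc/mdstat
# detailed state of the volume; expected to report 'clean, degraded'
mdadm -D /dev/md/raid5 | grep -i state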

Additional info:
When one of the disks fails, udev processes "/usr/lib/udev/rules.d/65-md-incremental.rules". The "65-md-incremental.rules" file contains the following rules:
SUBSYSTEM=="block", ACTION=="remove", ENV{ID_FS_TYPE}=="linux_raid_member", \
        RUN+="/sbin/mdadm -If $name --path $env{ID_PATH}"
SUBSYSTEM=="block", ACTION=="remove", ENV{ID_FS_TYPE}=="isw_raid_member", \
        RUN+="/sbin/mdadm -If $name --path $env{ID_PATH}"

If a disk fails and "$env{ID_PATH}" is empty, udev runs "/sbin/mdadm -If sdd --path" (which does nothing, because it is an invalid mdadm command) instead of "/sbin/mdadm -If sdd".

The following patch fixes this bug:
correct-65-md-incremental-rules-in-case-a-raid-disk-fails.patch
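
For reference, the fix described above amounts to passing --path only when ID_PATH is actually set; a sketch of what the corrected linux_raid_member rule could look like (the isw_raid_member rule changes the same way; the attached patch is authoritative):

# ID_PATH is set and non-empty: pass it to mdadm
SUBSYSTEM=="block", ACTION=="remove", ENV{ID_PATH}=="?*", \
        ENV{ID_FS_TYPE}=="linux_raid_member", \
        RUN+="/sbin/mdadm -If $name --path $env{ID_PATH}"
# ID_PATH is unset or empty: drop the --path option entirely
SUBSYSTEM=="block", ACTION=="remove", ENV{ID_PATH}!="?*", \
        ENV{ID_FS_TYPE}=="linux_raid_member", \
        RUN+="/sbin/mdadm -If $name"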

Comment 1 Fedora Update System 2012-12-11 16:38:54 UTC
mdadm-3.2.6-7.fc17 has been submitted as an update for Fedora 17.
https://admin.fedoraproject.org/updates/mdadm-3.2.6-7.fc17

Comment 2 Fedora Update System 2012-12-11 16:47:37 UTC
mdadm-3.2.6-7.fc16 has been submitted as an update for Fedora 16.
https://admin.fedoraproject.org/updates/mdadm-3.2.6-7.fc16

Comment 3 Fedora Update System 2012-12-12 04:31:58 UTC
Package mdadm-3.2.6-7.fc17:
* should fix your issue,
* was pushed to the Fedora 17 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing mdadm-3.2.6-7.fc17'
as soon as you are able to.
Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2012-20222/mdadm-3.2.6-7.fc17
then log in and leave karma (feedback).

Comment 4 Lukasz Dorau 2012-12-13 10:09:13 UTC
Intel has tested the package mdadm-3.2.6-7.fc17 and confirms the bug is fixed in this build.

Comment 5 Fedora Update System 2013-01-05 06:50:31 UTC
mdadm-3.2.6-7.fc16 has been pushed to the Fedora 16 stable repository.  If problems still persist, please make note of it in this bug report.

Comment 6 Fedora Update System 2013-01-05 06:52:31 UTC
mdadm-3.2.6-7.fc17 has been pushed to the Fedora 17 stable repository.  If problems still persist, please make note of it in this bug report.

