Bug 467587

Summary: RAID5 arrays getting set active in a degraded state
Product: Fedora
Reporter: Frank Arnold <frank.arnold>
Component: mdadm
Assignee: Doug Ledford <dledford>
Status: CLOSED DUPLICATE
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: medium
Priority: medium
Version: rawhide
CC: dledford
Target Milestone: ---
Target Release: ---
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2008-10-24 18:09:13 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Attachments: Output of dmesg

Description Frank Arnold 2008-10-18 23:39:46 UTC
Created attachment 320773 [details]
Output of dmesg

Description of problem:
Setup: 3 identical disks with 6 partitions per disk, all used for RAID5 arrays.
The setup works, but some of the arrays are set active before all member devices are bound; the missing members are added at a later stage. This triggers a resync on every reboot. It only happens with some of the arrays.


Version-Release number of selected component (if applicable):
Seen with a clean Fedora 10 Beta installation. Persists after updating to the latest Rawhide.


Output of dmesg is attached.
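Since the dmesg attachment is not reproduced inline, here is a minimal sketch of how one could spot which arrays came up degraded from /proc/mdstat: an underscore in the member-status bitmap (e.g. `[UU_]`) marks a missing device. The sample content below is hypothetical, not taken from this report.

```shell
# Hypothetical /proc/mdstat snippet illustrating the symptom described above;
# the real output for this report lives in the attached dmesg.
cat > mdstat.sample <<'EOF'
md0 : active raid5 sdc1[2] sdb1[1] sda1[0]
      156288256 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md1 : active raid5 sdb2[1] sda2[0]
      156288256 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
EOF

# An underscore in the status bitmap ([UU_]) marks a missing member,
# i.e. an array that was set active in a degraded state.
grep -B1 '\[U*_' mdstat.sample | awk '/^md/ {print $1}'
# prints: md1
```

On a live system the same filter would be run against /proc/mdstat itself instead of the sample file.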

Comment 1 Doug Ledford 2008-10-24 18:09:13 UTC

*** This bug has been marked as a duplicate of bug 453314 ***