Bug 467587 - RAID5 arrays getting set active in a degraded state
Status: CLOSED DUPLICATE of bug 453314
Product: Fedora
Classification: Fedora
Component: mdadm
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Doug Ledford
QA Contact: Fedora Extras Quality Assurance
Reported: 2008-10-18 19:39 EDT by Frank Arnold
Modified: 2008-10-24 14:09 EDT (History)

Doc Type: Bug Fix
Last Closed: 2008-10-24 14:09:13 EDT

Attachments
Output of dmesg (47.94 KB, text/plain)
2008-10-18 19:39 EDT, Frank Arnold

Description Frank Arnold 2008-10-18 19:39:46 EDT
Created attachment 320773 [details]
Output of dmesg

Description of problem:
Following setup:
3 identical disks with 6 partitions per disk, all to be used in RAID5.
This setup works, but some of the arrays are set active before all of their members are bound; the missing members are added later, which triggers a resync on every reboot. Only some of the arrays are affected.

Version-Release number of selected component (if applicable):
Seen with a clean Fedora 10 Beta installation. Persists after updating to the latest Rawhide.

Output of dmesg is attached.
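One way to spot arrays that came up degraded, as described above, is to check the `[total/active]` device counts in /proc/mdstat. A minimal sketch (the sample mdstat text below is illustrative only, not taken from the attached dmesg):

```python
import re

# Illustrative /proc/mdstat excerpt: md6 started with 2 of 3 members.
MDSTAT = """\
md5 : active raid5 sdc5[2] sdb5[1] sda5[0]
      1048448 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md6 : active raid5 sdc6[2] sdb6[1]
      1048448 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]
"""

def degraded_arrays(mdstat_text):
    """Return names of md arrays whose active member count is below total."""
    degraded = []
    current = None
    for line in mdstat_text.splitlines():
        header = re.match(r"(md\d+) :", line)
        if header:
            current = header.group(1)
        counts = re.search(r"\[(\d+)/(\d+)\]", line)
        if counts and current and int(counts.group(2)) < int(counts.group(1)):
            degraded.append(current)
    return degraded

print(degraded_arrays(MDSTAT))  # prints ['md6']
```

On a live system the same check can be run against the real file with `open("/proc/mdstat").read()`; a non-empty result on every boot would match the resync-on-reboot symptom reported here.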
Comment 1 Doug Ledford 2008-10-24 14:09:13 EDT

*** This bug has been marked as a duplicate of bug 453314 ***
