Bug 238353

Summary: Software raid arrays degraded after rawhide install
Product: Fedora
Reporter: Bruno Wolff III <bruno>
Component: anaconda
Assignee: Peter Jones <pjones>
Status: CLOSED RAWHIDE
Severity: medium
Priority: medium
Version: rawhide
Hardware: All
OS: Linux
Doc Type: Bug Fix
Last Closed: 2007-05-21 18:05:32 UTC
Bug Blocks: 150226

Description Bruno Wolff III 2007-04-29 19:05:30 UTC
Description of problem:
After doing a fresh install onto existing software (md) RAID arrays and
rebooting, the arrays are all degraded, with only one member listed each.
Once I fixed them up with mdadm, they stayed fixed across reboots.
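The manual fix-up mentioned above can be sketched with mdadm. This is a hedged sketch, not the reporter's exact commands: /dev/md0 and /dev/sdb2 are placeholder names for the degraded array and its dropped member; substitute the devices mdadm reports on your system.

```shell
# Show the array's state and which slot is missing or removed
# (/dev/md0 is a placeholder array name):
mdadm --detail /dev/md0

# Re-add the dropped member (placeholder partition); the kernel
# resyncs it in the background:
mdadm /dev/md0 --add /dev/sdb2

# Watch the resync progress:
cat /proc/mdstat
```

After the resync completes, the array should stay fully assembled across reboots, which matches the behavior the reporter saw.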

Version-Release number of selected component (if applicable):
Rawhide from April 27 and test4 both showed this problem, but I think it goes
back further; I just didn't notice until recently.

How reproducible:
It seems to happen every time, but my sample size isn't that big.

Steps to Reproduce:
1. Do a fresh install on existing raid arrays.
2. Check the status of the arrays after rebooting.
  
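Step 2 above amounts to reading the md status interfaces. A minimal sketch, assuming a two-disk RAID1 (the /dev/md0 name and the sample output lines are illustrative, not taken from the reporter's system):

```shell
# Overview of all arrays; a degraded two-disk mirror shows "[2/1] [U_]"
# instead of "[2/2] [UU]" (sample output shown as comments):
cat /proc/mdstat
#   md0 : active raid1 sda2[0]
#         1023936 blocks [2/1] [U_]

# Per-array detail; check the "State :" line and the device table
# (/dev/md0 is a placeholder name):
mdadm --detail /dev/md0

# Scripted check: a "_" inside the status brackets means a member
# is missing from that array.
grep -q '\[[U_]*_[U_]*\]' /proc/mdstat && echo "degraded array found"
```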
Actual results:
Raid arrays degraded

Expected results:
Raid arrays functioning with the setup defined during the install.

Additional info:
I didn't test this with arrays assembled during the install, so I don't know
whether that would make a difference.

Comment 1 Jeremy Katz 2007-05-21 18:05:32 UTC
I just did an install with today's rawhide plus the changes to use mdadm in
anaconda instead of the old raid bits, and I can't reproduce this.  I _suspect_
this was due to horkiness with raidautorun that won't get hit now.  Please
reopen if you still see this with tomorrow's rawhide or later.