Description of problem:
During examination of the storage devices, the message:
"Disks sda,sdb,sdc,sdd,sde,sdf contains BIOS RAID metadata, but are not part of any recognized BIOS RAID sets. Ignoring disks sda,sdb,sdc,sdd,sde,sdf."
is displayed, and the IMSM RAID volume is not assembled.
Version-Release number of selected component (if applicable):
RHEL6.0 Snapshot9 x64 - Manual installation
How reproducible:
Always, on all RAID levels (1/10/5/0)
Steps to Reproduce:
1. Delete any RAID volumes in Intel OROM
2. Create a RAID volume (level 5 on 6 drives)
3. Start manual installation using DVD drive
4. Select "Advanced storage devices" and, on the next screen, select the RAID volume as the installation target.
5. While the storage devices are being examined, the warning message above is displayed.
6. Click OK and continue with the installation.
7. Anaconda crashes during partitioning because the RAID volume device was not assembled.
(anaconda log is attached)
- Tested on two different PCs (with different drive sets)
- # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] [linear]
md0 : inactive sdc(S)
2257 blocks super external:imsm
unused devices: <none>
- # mdadm -Es
ARRAY metadata=imsm UUID=b13a148d:e3b409d3:5c30cffa:0d15f47f
ARRAY /dev/md/r5 container=b13a148d:e3b409d3:5c30cffa:0d15f47f member=0 UUID=d6eaaa7a:d56d158f:2dee6f25:6fb127a0
- # dmraid -s
*** Group superset isw_cfjdggjeeh
name : isw_cfjdggjeeh_r5
size : 499974400
stride : 128
type : raid5_la
status : ok
devs : 6
spares : 0
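The mdadm -Es output above already lists the container and volume records needed for assembly. As a point of comparison with what anaconda fails to do, a hedged sketch (not part of the original report; behavior assumed from standard mdadm usage) of assembling the set by hand:
  mdadm -Es >> /etc/mdadm.conf    # record the ARRAY lines shown above
  mdadm -A --scan                 # assemble the container and its r5 member
  cat /proc/mdstat                # the r5 volume should now appear active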
Created attachment 437547 [details]
Created attachment 437548 [details]
Created attachment 437549 [details]
metadata on drives
This issue has been proposed when we are only considering blocker
issues in the current Red Hat Enterprise Linux release.
** If you would still like this issue considered for the current
release, ask your support representative to file as a blocker on
your behalf. Otherwise ask that it be considered for the next
Red Hat Enterprise Linux release. **
Created attachment 437663 [details]
Restore "--no-degraded" as a deprecated option
mdadm 3.1.3 introduces 'container_enough' functionality which makes external-metadata incremental assembly behave like native-metadata incremental assembly, i.e. the array will start automatically when all expected devices are present, or will start when -R (--run) is specified to force assembly to proceed with the current set of devices.
The --no-degraded option is no longer needed, but it should have been marked deprecated rather than removed outright.
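A hedged illustration of the behaviors described above (device names follow this report; exact messages depend on the mdadm build):
  mdadm -I /dev/sda               # offer a member as it appears; with
                                  # container_enough the array starts only
                                  # once all expected devices are present
  mdadm -I -R /dev/sda            # --run forces assembly to proceed with
                                  # the current set of devices
  mdadm -I --no-degraded /dev/sda # rejected outright by unpatched mdadm
                                  # 3.1.3; accepted as a deprecated no-op
                                  # with this patch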
I've tested the attached patch with a local ITP respin of the RH ISO, and the issue still exists.
Created attachment 437854 [details]
Created attachment 437856 [details]
Created attachment 437857 [details]
metadata on drives
(In reply to comment #6)
> Created an attachment (id=437663) [details]
> Restore "--no-degraded" as a deprecated option
> mdadm 3.1.3 introduces 'container_enough' functionality which makes
> external-metadata incremental assembly behave like native-metadata incremental
> assembly, i.e. the array will start automatically when all expected devices are
> present, or will start when -R (--run) is specified to force assembly to
> proceed with the current set of devices.
> The --no-degraded option is no longer needed, but it should have been marked
> deprecated rather than removed outright.
Thanks for the patch; I believe this is indeed the underlying cause of this bug. Note, though, that anaconda has stopped using -I --no-degraded as of anaconda-13.21.67-1 / Snapshot 10 (see bug 620359). So I believe this bug can be marked as a duplicate of 620359.
Created attachment 437932 [details]
Return success in the 'container not enough' case
Commit 97b4d0e9 "Incremental: honor an 'enough' flag from external
handlers" introduced a regression in that it changed the error return
code for successful invocations.
So both patches in this bug are needed: one so that mdadm remembers "--no-degraded" was once an accepted option, and this one so that it does not return an error when the container is not assembled.
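A hedged sketch of the exit-code regression (device name taken from this report): offering a member to a container that is not yet "enough" succeeds, but the unpatched mdadm reports failure, so scripted callers treat it as an error:
  mdadm -I /dev/sdc
  echo $?                         # non-zero with the regression; 0 once
                                  # this patch is applied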
Sanity tests done. Basic examples executed (creating a volume and failing a disk); see the sketch below.
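A hedged reconstruction of that basic example (array names such as /dev/md/imsm0 and /dev/md/r5 are illustrative, not taken from the test logs):
  mdadm -C /dev/md/imsm0 -e imsm -n 6 /dev/sd[a-f]   # IMSM container
  mdadm -C /dev/md/r5 -l 5 -n 6 /dev/md/imsm0        # RAID5 volume inside it
  mdadm /dev/md/r5 -f /dev/sda                       # fail one member
  cat /proc/mdstat                                   # volume continues, degraded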
Not reproducible on RHEL6.0 Snapshot 10 x86_64.
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.