Bug 615998 - Kickstart installation on IMSM RAID volume fails if an old metadata was present on drives.
Summary: Kickstart installation on IMSM RAID volume fails if an old metadata was present on drives.
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: anaconda
Version: 6.0
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Anaconda Maintenance Team
QA Contact: Release Test Team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-07-19 13:11 UTC by Ignacy Kasperowicz
Modified: 2010-08-20 11:11 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-07-22 17:57:08 UTC
Target Upstream Version:
Embargoed:


Attachments
Storage log (81.52 KB, text/plain), 2010-07-19 13:11 UTC, Ignacy Kasperowicz
Anaconda log (6.18 KB, text/plain), 2010-07-19 13:11 UTC, Ignacy Kasperowicz
Dmesg output (45.25 KB, text/plain), 2010-07-19 13:11 UTC, Ignacy Kasperowicz
Anaconda log (6.18 KB, text/plain), 2010-07-19 13:12 UTC, Ignacy Kasperowicz
Syslog (68.56 KB, text/plain), 2010-07-19 13:13 UTC, Ignacy Kasperowicz

Description Ignacy Kasperowicz 2010-07-19 13:11:16 UTC
Created attachment 432867 [details]
Storage log

Description of problem:
 - Anaconda shows the error: "Disk sdb contains BIOS RAID metadata, but is not part of any recognized BIOS RAID sets. Ignoring disk sdb."
 - The IMSM RAID volume does not start (only the container is assembled, not the RAID volume).
 - mdmon is not running (see the attached syslog for crash info)

How reproducible:
Sporadically.

Steps to Reproduce:
1. Clear all metadata on drives using command: mdadm --zero-superblock /dev/sd[a-f]
2. Create RAID1 in Intel OROM on 2 HDDs
3. Start kickstart installation using config file like:

install
nfs --server=172.28.58.30 --dir=/pxe/auto-installation
key --skip
lang en_US.UTF-8
keyboard us
rootpw donotchange
firewall --enable --ssh
authconfig --enableshadow --enablemd5
selinux --enforcing
timezone --utc America/New_York
zerombr no
ignoredisk --drives=sda
bootloader --location=boot --driveorder=md127 --append="pci=nommconf rhgb quiet"
clearpart --initlabel
part pv.039936 --size=1 --grow --ondisk=md127
volgroup lvmvolgroup0 --pesize=32768 pv.039936
logvol / --fstype ext3 --size=37274 --name=lvm_root --vgname=lvmvolgroup0
logvol swap --fstype swap --size=2662 --name=lvm_swap --vgname=lvmvolgroup0
part /boot --fstype ext2 --size=2662 --ondisk=md127
reboot
%packages
@core
@base
emacs
%post
#!/bin/sh
echo 1279539657 > /tmp/os_stamp

4. Installation passes and OS is bootable
5. Delete RAID in OROM and create it again (the same volume as before)
6. Start installation again with the same config file and installation process will be interrupted.
In the shell:
# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] [linear] 
md0 : inactive sdb[0](S)
      2257 blocks super external:imsm
       
unused devices: <none>
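The failure state above is recognizable mechanically: the container appears in /proc/mdstat as inactive, with only a spare member and an "super external:imsm" superblock line. A minimal sketch of that check (the helper name and parsing are illustrative, not anaconda code):

```shell
#!/bin/sh
# Return 0 (true) if the given mdstat text shows an inactive array
# followed by an IMSM external superblock line, i.e. the pattern:
#   md0 : inactive sdb[0](S)
#         2257 blocks super external:imsm
is_inactive_imsm_container() {
    printf '%s\n' "$1" | awk '
        / : inactive /                        { prev_inactive = 1; next }
        prev_inactive && /super external:imsm/ { found = 1 }
                                              { prev_inactive = 0 }
        END { exit !found }'
}

mdstat='md0 : inactive sdb[0](S)
      2257 blocks super external:imsm'

if is_inactive_imsm_container "$mdstat"; then
    echo "stale/partial IMSM container detected"
fi
```

For the sample input above this prints the warning line; an active raid1 entry would not match.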
  
Actual results:
RAID volume is not assembled, mdmon crashes

Expected results:
RAID volume should be assembled properly during installation

Additional info:
SW: RHEL 6.0 Snapshot 7 x64
HW: DQ35JOE - ICH9 chipset

Workaround:
Manually delete metadata from drives using mdadm --zero-superblock command.
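The workaround can be staged from a shell (e.g. anaconda's second console) before recreating the volume in the Intel OROM. A sketch with a hypothetical dry-run wrapper, since the real commands destroy metadata (the `run` helper and `DO_WIPE` switch are mine, not from the report):

```shell
#!/bin/sh
# Dry-run wrapper: prints the mdadm commands that would wipe the stale
# metadata; set DO_WIPE=1 to actually execute them (destructive!).
run() {
    if [ "${DO_WIPE:-0}" = 1 ]; then
        "$@"
    else
        echo "would run: $*"
    fi
}

run mdadm --stop /dev/md0                 # stop the half-assembled container
run mdadm --zero-superblock /dev/sd[a-f]  # wipe stale MD/IMSM superblocks
```

After wiping, recreate the volume in the OROM and restart the installation.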

Comment 1 Ignacy Kasperowicz 2010-07-19 13:11:36 UTC
Created attachment 432868 [details]
Anaconda log

Comment 2 Ignacy Kasperowicz 2010-07-19 13:11:54 UTC
Created attachment 432869 [details]
Dmesg output

Comment 3 Ignacy Kasperowicz 2010-07-19 13:12:16 UTC
Created attachment 432870 [details]
Anaconda log

Comment 4 Ignacy Kasperowicz 2010-07-19 13:13:03 UTC
Created attachment 432871 [details]
Syslog

Comment 6 RHEL Program Management 2010-07-19 13:37:34 UTC
This issue has been proposed when we are only considering blocker
issues in the current Red Hat Enterprise Linux release. It has
been denied for the current Red Hat Enterprise Linux release.

** If you would still like this issue considered for the current
release, ask your support representative to file as a blocker on
your behalf. Otherwise ask that it be considered for the next
Red Hat Enterprise Linux release. **

Comment 7 Marcin Labun 2010-07-21 14:58:40 UTC
Hi,
when I created an IMSM (Intel) RAID 1, the installer found a BIOS RAID of 0MB size, ddf1_raidddf. Can you point me to the place in the anaconda code that does the RAID discovery?
thanks,
Marcin

Comment 8 Marcin Labun 2010-07-22 14:42:30 UTC
anaconda does not activate degraded arrays, so when an array member is missing the array is not activated and the system is not installed.
In this particular reproduction, it could be that /dev/sda was part of the array while the kickstart instructed anaconda to ignore that RAID member disk (*ignoredisk --drives=sda*).
In general, all member disks must be present in the system, and it is recommended that IMSM metadata be the only metadata on the disks holding the RAID volume on which the system is going to be installed.
Lowering the priority of the bug.
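The ignoredisk interaction described in this comment can be caught before launching the install. A sketch (the helper name and interface are mine, not part of anaconda or kickstart) that reports disks appearing both as RAID members and on the "ignoredisk --drives=" list:

```shell
#!/bin/sh
# Print any disk that is both a RAID member and listed in
# "ignoredisk --drives=...", which would leave the array degraded.
# $1 = comma-separated ignoredisk list, $2 = space-separated member disks.
ignored_members() {
    ignored=$(printf '%s' "$1" | tr ',' ' ')
    for m in $2; do
        for i in $ignored; do
            [ "$m" = "$i" ] && echo "$m"
        done
    done
    return 0
}

# Example matching the reproduction: sda was a member but was ignored.
ignored_members "sda" "sda sdb"    # prints: sda
```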

Comment 9 Doug Ledford 2010-07-22 17:42:04 UTC
In general, even if we are going to support installing on existing raid devices that are degraded, it's going to be up to anaconda to force their degraded assembly.  As such, I'm changing the component to anaconda on this bug as I don't think mdadm has anything it needs to do.

Comment 10 David Cantrell 2010-07-22 17:57:08 UTC
anaconda has never supported installation to degraded arrays.  It's not something we intend to support.

Comment 11 Marcin Labun 2010-07-23 09:08:03 UTC
(In reply to comment #10)
> anaconda has never supported installation to degraded arrays.  It's not
> something we intend to support.    
I think the information the user receives when a degraded array is not activated/used is unclear; maybe anaconda could display more specific information, such as the list of disks taken into account during activation and a note that the RAID device will not be activated because it is in a degraded or failed state.
Would it also be possible to display the name of the BIOS RAID format, e.g. DDF or IMSM?

Current information: "Disk sdb contains BIOS RAID metadata, but is not part
of any recognized BIOS RAID sets. Ignoring disk sdb."
thanks,
Marcin Labun

Comment 12 jbielans 2010-08-20 11:11:40 UTC
Not reproducible on RHEL6.0 Snapshot 10 x86_64.

