Bug 1275354 - OS installation fails on a VD created from software RAID (mdadm)
Summary: OS installation fails on a VD created from software RAID (mdadm)
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: anaconda
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: David Lehman
QA Contact: Release Test Team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-10-26 15:58 UTC by Lakshmi_Narayanan_Du
Modified: 2016-01-11 20:33 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-01-11 20:33:24 UTC
Target Upstream Version:
Embargoed:


Attachments
unknown error screenshot (58.83 KB, image/jpeg), 2015-10-26 15:58 UTC, Lakshmi_Narayanan_Du
anaconda log (22.47 KB, text/plain), 2015-10-26 16:03 UTC, Lakshmi_Narayanan_Du

Description Lakshmi_Narayanan_Du 2015-10-26 15:58:12 UTC
Created attachment 1086547 [details]
unknown error screenshot

Description of problem:
OS installation fails on a VD created from an mdadm software RAID.

Version-Release number of selected component (if applicable):
Snapshot 4

How reproducible:
Always

Steps to Reproduce:
1. Start the anaconda installer.

2. Create a full RAID1 VD from the CLI (a verification sketch follows these steps):
   mdadm -C -l1 -n2 --metadata=1.2 /dev/md126 /dev/sda /dev/sdb

3. Once the VD for installation is created and synchronized, return to the installation options page that was previously presented by pressing Ctrl+Alt+F7.

4. From the storage media selection option, select the md126 MD VD and proceed with the OS installation.

5. As soon as the MD VD is selected, the installation can abort at any moment (at an inconsistent point in time), throwing "An Unknown Error has Occurred".
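
A quick way to confirm the array is assembled and fully synced before returning to the installer (a minimal sketch, assuming a shell is reachable on another virtual console, e.g. Ctrl+Alt+F2):

  cat /proc/mdstat                        # md126 should show [UU] once the resync completes
  mdadm --detail /dev/md126               # "State : clean" indicates the array is ready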

Actual results:
As soon as the MD VD is selected, the installation can abort at any moment (at an inconsistent point in time), throwing "An Unknown Error has Occurred".

Expected results:
No error; the installation should proceed on the MD VD that was created.

Additional info:
Attaching anaconda.log and a screenshot.

Comment 1 Lakshmi_Narayanan_Du 2015-10-26 16:03:01 UTC
Created attachment 1086548 [details]
anaconda log

anaconda log

Comment 3 Lakshmi_Narayanan_Du 2015-10-26 16:07:33 UTC

Seems "/dev/md126" created using cli command  is stored wrongly as  "/dev/md/126 by the installer . At install select installer searches "/dev/md/126" and throws An exception " IOException: Could not stat device /dev/md/126 - No such file or directory
as shown in the below exception trace

------------------------------------------------------------------------
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 872, in clearPartitions
    self.initializeDisk(disk)

  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 911, in initializeDisk
    labelType = _platform.bestDiskLabelType(disk)

  File "/usr/lib/python2.7/site-packages/blivet/platform.py", line 128, in bestDiskLabelType
    parted_device = parted.Device(path=device.path)

  File "/usr/lib64/python2.7/site-packages/parted/decorators.py", line 41, in new
    ret = fn(*args, **kwds)

  File "/usr/lib64/python2.7/site-packages/parted/device.py", line 54, in __init__
    self.__device = _ped.device_get(path)

IOException: Could not stat device /dev/md/126 - No such file or directory.
---------------------------------------------------------------------------
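
One quick way to confirm which device node actually exists from a shell on the installer console (a minimal sketch, assuming a shell on another virtual console):

  ls -l /dev/md126 /dev/md/126            # in this scenario only /dev/md126 exists
  cat /proc/mdstat                        # shows the kernel name (md126) of the running array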

Comment 4 Brian Lane 2015-10-27 00:13:35 UTC
Did you rescan the disks after making your changes?

Please attach all of the logs from /tmp/*log as individual text/plain attachments.

Comment 5 David Lehman 2015-10-27 13:40:21 UTC
You have to give the new array/VD a name, like this:

  mdadm -C -l1 -n2 --metadata=1.2 /dev/md/vdisk1 /dev/sda /dev/sdb

And you may need to either reboot or tell the installer to rescan storage after creating the array.
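
For illustration, a minimal sketch of the full sequence (vdisk1 is just the example name used above):

  mdadm -C -l1 -n2 --metadata=1.2 /dev/md/vdisk1 /dev/sda /dev/sdb
  mdadm --detail --scan                   # should report ARRAY /dev/md/vdisk1 ...
  ls -l /dev/md/                          # vdisk1 should be a symlink to the kernel md device node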

Comment 6 Lakshmi_Narayanan_Du 2015-11-04 11:25:54 UTC
(In reply to David Lehman from comment #5)
> You have to give the new array/VD a name, like this:
> 
>   mdadm -C -l1 -n2 --metadata=1.2 /dev/md/vdisk1 /dev/sda /dev/sdb
> 
> And you may need to either reboot or tell the installer to rescan storage
> after creating the array.


David ,

We generally use "/dev/md127", not "/dev/md/127", when creating arrays.

When I searched quickly, I found the same usage in the link below:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Installation_Guide/s1-s390info-raid.html

Please let me know whether this is due to a design change. If so, is it documented?

Comment 7 David Lehman 2016-01-04 21:51:18 UTC
The v1.2 metadata format supports persistent names. This format most probably did not exist when the document you referenced was written (2010 or so), but it is now the default for new arrays. Think of the equivalent situation for device-mapper and lvm: would you choose to create a new logical volume with a name like 'dm-3' when you could instead use 'root' or 'home'? This change in the installer was not documented because it is only relevant in very rare cases like yours. If you simply reboot after creating the array, you will not see this issue, which is due to an odd inconsistency in the mdadm tool (see bug 1083641).
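
For example, the persistent name is recorded in the v1.2 superblock on the member devices and can be inspected directly (a minimal sketch, using the vdisk1 name from comment 5):

  mdadm --examine /dev/sda | grep -E 'Version|Name'
  #     Version : 1.2
  #        Name : <hostname>:vdisk1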

Comment 8 Charles Rose (Dell) 2016-01-05 17:51:55 UTC
(In reply to David Lehman from comment #7)
> The v1.2 metadata format supports persistent names. This format most
> probably did not exist when the document you referenced was written (2010 or
> so), but it is now the default for new arrays. Think of the equivalent
> situation for device-mapper and lvm: would you choose to create a new
> logical volume with a name like 'dm-3' when you could instead use 'root' or
> 'home'? This change in the installer was not documented because it is only
> relevant in very rare cases like yours. If you simply reboot after creating
> the array you will not see this issue, which is due to an odd inconsistency
> in the mdadm tool (see bug 1083641).

David,
We can close this bug.

Comment 9 Chris Lumens 2016-01-11 20:33:24 UTC
Closing per comment #8.  Feel free to change the exact resolution if you don't feel like it reflects what actually happened here.  Thanks!

