Bug 1652058 - Use mdadm to create raid1 with two disks and partition it, then remove one disk from the raid. After this, system installation will fail.
Summary: Use mdadm to create raid1 with two disks and partition it, then remove one disk ...
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: python-blivet
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Blivet Maintenance Team
QA Contact: Release Test Team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-11-21 13:50 UTC by Zhou Yihang
Modified: 2021-09-03 14:09 UTC (History)
CC List: 0 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-03-15 07:31:43 UTC
Target Upstream Version:
Embargoed:


Attachments
Exception report (117.93 KB, image/png) — 2018-11-21 13:50 UTC, Zhou Yihang

Description Zhou Yihang 2018-11-21 13:50:02 UTC
Created attachment 1507698 [details]
Exception report

Description of problem:
I used mdadm to create a raid1 with two disks and partitioned it, then removed one disk from the raid. After this, installing the system reports an exception. Interestingly, if I remove the other disk instead and leave the originally removed one in place, the installation succeeds.
Exception details:
anaconda 21.48.22.134-1 exception report
Traceback (most recent call first):
  File "/usr/lib/python2.7/site-packages/blivet/devicelibs/mdraid.py", line 378, in name_from_md_node
    raise MDRaidError("name_from_md_node(%s) failed" % node)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 926, in addUdevPartitionDevice
    name = mdraid.name_from_md_node(name)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1240, in addUdevDevice
    device = self.addUdevPartitionDevice(info)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 2306, in _populate
    self.addUdevDevice(dev)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 2239, in populate
    self._populate()
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 495, in reset
    self.devicetree.populate(cleanupOnly=cleanupOnly)
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 190, in storageInitialize
    storage.reset()
  File "/usr/lib64/python2.7/threading.py", line 765, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 227, in run
    threading.Thread.run(self, *args, **kwargs)
MDRaidError: name_from_md_node(md127p1) failed

Local variables in innermost frame:
node: md127p1
name: None
md_dir: /dev/md
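
For context, here is a minimal sketch of what the failing lookup appears to do, inferred from the traceback and the local variables above; it is an illustration, not the verbatim blivet source. name_from_md_node() tries to map a kernel node such as md127p1 back to a named entry by following the symlinks under /dev/md, and raises MDRaidError when no link resolves to that node, which is what happens for the degraded array here.

import os

class MDRaidError(Exception):
    pass

def name_from_md_node(node, md_dir="/dev/md"):
    # Map a kernel device node (e.g. "md127p1") to the array/partition name
    # exposed through the /dev/md/* symlinks that mdadm.conf normally provides.
    name = None
    if os.path.isdir(md_dir):
        for link in os.listdir(md_dir):
            target = os.path.realpath(os.path.join(md_dir, link))
            if os.path.basename(target) == node:
                name = link
                break
    if name is None:
        # With no mdadm.conf there may be no /dev/md/* symlink for the
        # degraded array, so the lookup fails exactly as in this report.
        raise MDRaidError("name_from_md_node(%s) failed" % node)
    return name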


Version-Release number of selected component (if applicable):
python-blivet-0.61.15.69-1.el7.noarch.rpm

How reproducible:


Steps to Reproduce:
1. A normal system with two vacant disks, e.g. sdb and sdc.
2. mdadm -C /dev/md0 -l 1 -n 2 /dev/sdb /dev/sdc
3. Execute "fdisk /dev/md0" to create a partition on /dev/md0.
4. mdadm -f /dev/md0 /dev/sdc; mdadm -r /dev/md0 /dev/sdc
5. Shut down and install the system.
Actual results:
The installation reports an exception.

Expected results:
Installation succeeds.

Additional info:
If removing sdc does not reproduce the problem, redo the "Steps to Reproduce" but remove sdb instead of sdc.
I also tried installing Fedora 28 in addition to Red Hat 7.5; there was no problem with Fedora 28.
During the Fedora 28 installation I switched to the command line and found an mdadm configuration file, mdadm.conf, under /etc/, but I did not find it in Red Hat 7.5.
So I added the same mdadm.conf to the squashfs.img of Red Hat 7.5 and recreated the ISO. There was no exception report any more when I installed the system with the new ISO.
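
To make the workaround concrete, here is a hypothetical helper (the function name and target path are assumptions, not taken from this report) that captures the assembled arrays into an mdadm.conf, the same kind of file that was copied into the RHEL 7.5 squashfs.img above:

import subprocess

def write_mdadm_conf(path="/etc/mdadm.conf"):
    # "mdadm --detail --scan" prints one ARRAY line per assembled array;
    # recording them lets the arrays keep stable names instead of falling
    # back to md127-style names on the next assembly.
    scan = subprocess.check_output(["mdadm", "--detail", "--scan"])
    with open(path, "wb") as conf:
        conf.write(scan)

Run as root with the array assembled, this produces the same kind of configuration file that made the modified ISO install cleanly in the test above.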

Comment 4 RHEL Program Management 2021-03-15 07:31:43 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

