Bug 1652058
| Summary: | Use mdadm to create RAID1 with two disks and partition it, then remove one disk from the RAID. After this, system installation will fail. | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Zhou Yihang <zhouyihang1> |
| Component: | python-blivet | Assignee: | Blivet Maintenance Team <blivet-maint-list> |
| Status: | CLOSED WONTFIX | QA Contact: | Release Test Team <release-test-team-automation> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 7.5 | | |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-03-15 07:31:43 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | Exception report (attachment 1507698) | | |
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.
Created attachment 1507698 [details]
Exception report

Description of problem:
I used mdadm to create a RAID1 array with two disks and partitioned it, then removed one disk from the array. After this, system installation reports an exception. Interestingly, if I instead remove the other disk and keep the original one, the system installation succeeds.

Information detail: anaconda 21.48.22.134-1 exception report

```
Traceback (most recent call first):
  File "/usr/lib/python2.7/site-packages/blivet/devicelibs/mdraid.py", line 378, in name_from_md_node
    raise MDRaidError("name_from_md_node(%s) failed" % node)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 926, in addUdevPartitionDevice
    name = mdraid.name_from_md_node(name)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1240, in addUdevDevice
    device = self.addUdevPartitionDevice(info)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 2306, in _populate
    self.addUdevDevice(dev)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 2239, in populate
    self._populate()
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 495, in reset
    self.devicetree.populate(cleanupOnly=cleanupOnly)
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 190, in storageInitialize
    storage.reset()
  File "/usr/lib64/python2.7/threading.py", line 765, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 227, in run
    threading.Thread.run(self, *args, **kwargs)
MDRaidError: name_from_md_node(md127p1) failed

Local variables in innermost frame:
node: md127p1
name: None
md_dir: /dev/md
```

Version-Release number of selected component (if applicable):
python-blivet-0.61.15.69-1.el7.noarch.rpm

How reproducible:

Steps to Reproduce:
1. Start from a normal system with two vacant disks, e.g. sdb and sdc.
2. `mdadm -C /dev/md0 -l 1 -n 2 /dev/sdb /dev/sdc`
3. Run `fdisk /dev/md0` to create a partition on /dev/md0.
4. `mdadm -f /dev/md0 /dev/sdc; mdadm -r /dev/md0 /dev/sdc`
5. Shut down and install the system.

Actual results:
The installer reports the exception above.

Expected results:
Installation succeeds.

Additional info:
If removing sdc does not reproduce the problem, redo the "Steps to Reproduce" but remove sdb instead of sdc. I also tried installing Fedora 28 in addition to RHEL 7.5; there was no problem with Fedora 28. During the Fedora 28 installation I switched to the command line and found an mdadm configuration file, mdadm.conf, under /etc/, but I did not find one in RHEL 7.5. So I added the same mdadm.conf to the squashfs.img of RHEL 7.5 and recreated the ISO. There was no exception report any more when I installed the system with the new ISO. (See the sketches below for why the missing mdadm.conf matters and a way to generate one.)
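For context on the traceback: in blivet 0.61, name_from_md_node() maps a kernel device node such as md127p1 back to an array name by scanning the symlinks under /dev/md, and the local variables above (md_dir: /dev/md, name: None) indicate that no link matched. A rough shell check of that mapping follows; the link names shown are assumptions for illustration, not taken from the report.

```sh
# Inspect the /dev/md symlinks that blivet's name_from_md_node() scans.
# For the lookup of "md127p1" to succeed, some entry under /dev/md must
# be a link whose target is the node "md127p1".
ls -l /dev/md/

# Print each link and its target.
for link in /dev/md/*; do
    printf '%s -> %s\n' "$link" "$(readlink "$link")"
done
# Healthy case (illustrative names):
#   /dev/md/0   -> ../md127
#   /dev/md/0p1 -> ../md127p1
# In this bug no link resolved to md127p1 (hence name: None), which the
# reporter traced to the missing /etc/mdadm.conf in the RHEL 7.5
# installer environment.
```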
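And a hedged sketch of the reporter's workaround: supply an /etc/mdadm.conf in the installer environment so the array is assembled under a recorded, stable name. The exact file copied from Fedora 28 is not attached; `mdadm --detail --scan` produces a minimal equivalent (the UUID below is a placeholder).

```sh
# Generate ARRAY lines describing the currently assembled arrays.
mdadm --detail --scan
# e.g. (placeholder UUID):
#   ARRAY /dev/md0 metadata=1.2 name=localhost:0 UUID=0c9bdc5f:placeholder

# Record them so later assembly uses the stored array name instead of
# an anonymous node like /dev/md127.
mdadm --detail --scan > /etc/mdadm.conf
```

Injecting this file into the install media (the reporter repacked squashfs.img and rebuilt the ISO) is environment-specific and not shown here.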