Bug 1234333
| Summary: | Traceback when initializing blivet with MD RAID | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Jan Safranek <jsafrane> |
| Component: | python-blivet | Assignee: | David Lehman <dlehman> |
| Status: | CLOSED ERRATA | QA Contact: | Release Test Team <release-test-team-automation> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 7.2 | CC: | jsafrane, mbanas, mhruscak |
| Target Milestone: | rc | Keywords: | Regression |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | python-blivet-0.61.15.9-1 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-11-19 08:48:12 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1186677 | | |
| Attachments: | blivet log (attachment 1042131) | | |
This is a regression since RHEL 7.1.

Please attach the logs.

Created attachment 1042131 [details]
blivet log (/dev/md127 = RAID1 vda1+vda2)

Adding logs. /dev/md127 (the problematic one) is RAID 1 composed of /dev/vda1 and /dev/vda2.

I have fixed this in my local 7.2 branch.

*** Bug 1238056 has been marked as a duplicate of this bug. ***

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2232.html
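For illustration only: the comment above mentions a fix in a local 7.2 branch, and the traceback in the description below shows the MD array being added to the device tree a second time when the second member partition is processed. A guard of the following general shape, which looks the array up before adding it, would avoid the duplicate add. This is a hypothetical, self-contained sketch of the pattern, not the actual blivet patch; the class and method names are invented for the example.

```python
# Hypothetical sketch only -- not the real blivet code or the actual fix.
# It demonstrates the "look up before add" guard that prevents the
# "device is already in tree" ValueError when two RAID members both
# trigger discovery of the same array.
class ToyDeviceTree(object):
    def __init__(self):
        self._devices = []

    def _add_device(self, name):
        # Mirrors the check in blivet's DeviceTree._addDevice that raised the error.
        if name in self._devices:
            raise ValueError("device is already in tree")
        self._devices.append(name)

    def handle_md_member(self, array_name):
        # Only add the array the first time one of its members is seen.
        if array_name not in self._devices:
            self._add_device(array_name)


tree = ToyDeviceTree()
for member in ("/dev/vda1", "/dev/vda2"):
    tree.handle_md_member("/dev/md127")   # second call is now a no-op
print(tree._devices)                      # ['/dev/md127']
```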
With an MD RAID array on the system, blivet.reset() fails with a traceback:

```
  File "blivet_init.py", line 27, in <module>
    b.reset()
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 478, in reset
    self.devicetree.populate(cleanupOnly=cleanupOnly)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 2143, in populate
    self._populate()
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 2209, in _populate
    self.addUdevDevice(dev)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1258, in addUdevDevice
    self.handleUdevDeviceFormat(info, device)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1918, in handleUdevDeviceFormat
    self.handleUdevMDMemberFormat(info, device)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1674, in handleUdevMDMemberFormat
    self._addDevice(md_array)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 398, in _addDevice
    raise ValueError("device is already in tree")
ValueError: device is already in tree
```

Version-Release number of selected component (if applicable):
python-blivet-0.61.15.5-1.el7.noarch

How reproducible:
always

Steps to Reproduce:
1. Create an MD RAID array: `mdadm -C -l 1 -n 2 /dev/md/test /dev/vda{1,2}`
2. Run this small script (a fuller sketch follows this description):

   ```
   import blivet
   b = blivet.Blivet()
   b.reset()
   ```

Actual results:
traceback

Expected results:
no error

Additional info:
Reproducible in a recent nightly, RHEL-7.2-20150618.n.0, i.e. after the blivet rebase.
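Below is a slightly fuller version of the reproduction script from step 2 above, with standard-library logging enabled so that debug output similar to the attached blivet log is captured. The log file path and the reliance on blivet's messages propagating to the root logger are assumptions for this sketch, not taken from the report; the failure itself needs nothing more than the three lines shown in the steps.

```python
# Reproducer sketch for python-blivet 0.61.x on a system with an existing
# MD RAID array (e.g. /dev/md127 built from /dev/vda1 and /dev/vda2).
# The logging setup is optional and only intended to capture debug output
# for attachment; the log path below is an arbitrary choice.
import logging
import blivet

logging.basicConfig(filename="/tmp/blivet-repro.log", level=logging.DEBUG)

b = blivet.Blivet()
b.reset()   # scans storage; raises ValueError("device is already in tree")
```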