Bug 1279951
Summary:          pyanaconda.packaging.PayloadInstallError: Unable to find osimg for /dev/mapper/live-base

Product:          Fedora
Component:        dracut
Version:          23
Hardware:         x86_64
OS:               Linux
Status:           CLOSED EOL
Severity:         unspecified
Priority:         unspecified
Reporter:         ISO <a.hussien>
Assignee:         dracut-maint-list
QA Contact:       Fedora Extras Quality Assurance <extras-qa>
CC:               dracut-maint-list, g.kaviyarasu, harald, jonathan, vanmeeuwen+fedora, zbyszek
Target Milestone: ---
Target Release:   ---
Whiteboard:       abrt_hash:88cde4551878b87a6935c32588d51ef550e043433ac94b7a008a8cfc5d70d057;VARIANT_ID=workstation;
Doc Type:         Bug Fix
Last Closed:      2016-12-20 15:36:34 UTC
Description
ISO  2015-11-10 15:05:11 UTC

Created attachment 1092292 [details]: anaconda-tb
Created attachment 1092293 [details]: anaconda.log
Created attachment 1092294 [details]: environ
Created attachment 1092295 [details]: journalctl
No device file exists under /dev/mapper for the RAID array. Based on lsmod, dm-raid is not loaded.

/dev/mapper/live-base should have been set up by dmsquash-live-root in dracut before anaconda runs. Since this is a live install, the anaconda-dracut modules are not a factor. Please add "debug rd.debug" to the kernel command line and attach the journal log.

Created attachment 1093825 [details]
A gzipped tar archive of /var/log, created with "debug rd.debug" added to the kernel command line. This file was created with the IOMMU disabled in the firmware.

I have another archive with the IOMMU enabled; I can upload it if you think it might be useful.

One other issue I found with this machine: even though installation completes successfully each time, the "Test this media and Install Fedora" boot menu option fails at 4.8% if the test checks the media through the "/dev/disk/by-label/Fedora-Live-WS-x86_64-23-10" device file (which happens about 95% of the time). After the failure the system is halted, which prevents me from getting any further information. In the other 5%, the test checks the media through the "/dev/sde" device file, in which case it completes successfully. I could not identify any pattern in how the test chooses one device file over the other.

I don't believe this is related to the fakeRAID issue, since it happens regardless of whether the RAID configuration is present, but I thought I should mention it in case it is somehow relevant (I don't believe I am experienced enough to judge that). I will create another bug report for this issue, just in case.
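The debugging steps requested earlier (appending "debug rd.debug" to the kernel command line, then collecting logs) can be sketched as follows. This is a hedged illustration, not an exact reproduction of this machine's setup: only the volume label comes from this report, and the output file paths are arbitrary choices.

```shell
# Illustrative kernel command line for the live image; only the CDLABEL
# value is taken from this report, the rest is a typical live-boot line.
cmdline="root=live:CDLABEL=Fedora-Live-WS-x86_64-23-10 quiet"
cmdline="$cmdline debug rd.debug"   # flags requested by the maintainer
echo "$cmdline"

# After rebooting with these flags and reproducing the failure, one would
# typically collect logs on the live system along these lines:
#   lsmod | grep dm_                  # which device-mapper modules loaded
#   journalctl -b > /tmp/journal.log  # full journal for this boot
#   tar czf /tmp/var-log.tar.gz /var/log
```

The boot-time flags are edited interactively at the boot menu rather than in a script; the variable above only shows what the resulting command line should contain.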
Looks like this may be a blivet/libblockdev problem; buried in the anaconda-tb is this:

    Nov 10 17:43:05 localhost gnome-session[1772]: ERROR: pdc: zero sectors on /dev/sdd
    Nov 10 17:43:05 localhost gnome-session[1772]: ERROR: pdc: setting up RAID device /dev/sdd
    Nov 10 17:43:05 localhost gnome-session[1772]: ERROR: pdc: zero sectors on /dev/sdc
    Nov 10 17:43:05 localhost gnome-session[1772]: ERROR: pdc: setting up RAID device /dev/sdc
    Nov 10 17:43:05 localhost gnome-session[1772]: ERROR: pdc: zero sectors on /dev/sdb
    Nov 10 17:43:05 localhost gnome-session[1772]: ERROR: pdc: setting up RAID device /dev/sdb

      File "/usr/lib64/python3.4/site-packages/gi/overrides/BlockDev.py", line 395, in wrapped
        ret = orig_obj(*args, **kwargs)
      File "/usr/lib64/python3.4/site-packages/gi/overrides/BlockDev.py", line 153, in dm_get_member_raid_sets
        return _dm_get_member_raid_sets(name, uuid, major, minor)
    GLib.Error: g-bd-dm-error-quark: No RAIDs discovered (4)

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/usr/lib64/python3.4/site-packages/pyanaconda/threads.py", line 253, in run
        threading.Thread.run(self, *args, **kwargs)
      File "/usr/lib64/python3.4/threading.py", line 868, in run
        self._target(*self._args, **self._kwargs)
      File "/usr/lib/python3.4/site-packages/blivet/osinstall.py", line 1157, in storageInitialize
        storage.reset()
      File "/usr/lib/python3.4/site-packages/blivet/blivet.py", line 279, in reset
        self.devicetree.populate(cleanupOnly=cleanupOnly)
      File "/usr/lib/python3.4/site-packages/blivet/devicetree.py", line 554, in populate
        self._populator.populate(cleanupOnly=cleanupOnly)
      File "/usr/lib/python3.4/site-packages/blivet/populator.py", line 1623, in populate
        self._populate()
      File "/usr/lib/python3.4/site-packages/blivet/populator.py", line 1692, in _populate
        self.addUdevDevice(dev)
      File "/usr/lib/python3.4/site-packages/blivet/populator.py", line 764, in addUdevDevice
        self.handleUdevDeviceFormat(info, device)
      File "/usr/lib/python3.4/site-packages/blivet/populator.py", line 1474, in handleUdevDeviceFormat
        self.handleUdevDMRaidMemberFormat(info, device)
      File "/usr/lib/python3.4/site-packages/blivet/populator.py", line 1249, in handleUdevDMRaidMemberFormat
        rs_names = blockdev.dm.get_member_raid_sets(uuid, name, major, minor)
      File "/usr/lib64/python3.4/site-packages/gi/overrides/BlockDev.py", line 416, in wrapped
        raise transform[1](msg)
    gi.overrides.BlockDev.DMError: No RAIDs discovered

      File "/usr/lib64/python3.4/site-packages/pyanaconda/threads.py", line 253, in run
        threading.Thread.run(self, *args, **kwargs)
      File "/usr/lib64/python3.4/threading.py", line 868, in run
        self._target(*self._args, **self._kwargs)
      File "/usr/lib64/python3.4/site-packages/pyanaconda/timezone.py", line 76, in time_initialize
        threadMgr.wait(THREAD_STORAGE)
      File "/usr/lib64/python3.4/site-packages/pyanaconda/threads.py", line 116, in wait
        self.raise_if_error(name)
      File "/usr/lib64/python3.4/site-packages/pyanaconda/threads.py", line 171, in raise_if_error
        raise exc_info[0](exc_info[1]).with_traceback(exc_info[2])
      File "/usr/lib64/python3.4/site-packages/pyanaconda/threads.py", line 253, in run
        threading.Thread.run(self, *args, **kwargs)
      File "/usr/lib64/python3.4/threading.py", line 868, in run
        self._target(*self._args, **self._kwargs)
      File "/usr/lib/python3.4/site-packages/blivet/osinstall.py", line 1157, in storageInitialize
        storage.reset()
      File "/usr/lib/python3.4/site-packages/blivet/blivet.py", line 279, in reset
        self.devicetree.populate(cleanupOnly=cleanupOnly)
      File "/usr/lib/python3.4/site-packages/blivet/devicetree.py", line 554, in populate
        self._populator.populate(cleanupOnly=cleanupOnly)
      File "/usr/lib/python3.4/site-packages/blivet/populator.py", line 1623, in populate
        self._populate()
      File "/usr/lib/python3.4/site-packages/blivet/populator.py", line 1692, in _populate
        self.addUdevDevice(dev)
      File "/usr/lib/python3.4/site-packages/blivet/populator.py", line 764, in addUdevDevice
        self.handleUdevDeviceFormat(info, device)
      File "/usr/lib/python3.4/site-packages/blivet/populator.py", line 1474, in handleUdevDeviceFormat
        self.handleUdevDMRaidMemberFormat(info, device)
      File "/usr/lib/python3.4/site-packages/blivet/populator.py", line 1249, in handleUdevDMRaidMemberFormat
        rs_names = blockdev.dm.get_member_raid_sets(uuid, name, major, minor)
      File "/usr/lib64/python3.4/site-packages/gi/overrides/BlockDev.py", line 416, in wrapped
        raise transform[1](msg)
    gi.overrides.BlockDev.DMError: No RAIDs discovered

      File "/usr/lib64/python3.4/site-packages/pyanaconda/threads.py", line 253, in run
        threading.Thread.run(self, *args, **kwargs)
      File "/usr/lib64/python3.4/threading.py", line 868, in run
        self._target(*self._args, **self._kwargs)
      File "/usr/lib64/python3.4/site-packages/pyanaconda/packaging/__init__.py", line 1275, in _runThread
        payload.setup(storage, instClass)
      File "/usr/lib64/python3.4/site-packages/pyanaconda/packaging/livepayload.py", line 79, in setup
        raise PayloadInstallError("Unable to find osimg for %s" % self.data.method.partition)
    pyanaconda.packaging.PayloadInstallError: Unable to find osimg for /dev/mapper/live-base

I tried to install Fedora 24 with the RAID controller configured as RAID 5, and I still got the same problem.

This message is a reminder that Fedora 23 is nearing its end of life. Approximately four weeks from now, Fedora will stop maintaining and issuing updates for Fedora 23. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '23'.

Package Maintainer: if you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue, and we are sorry that we were not able to fix it before Fedora 23 reached end of life.
If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version before this bug is closed, as described in the policy above. Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.

Fedora 23 changed to end-of-life (EOL) status on 2016-12-20. Fedora 23 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result, we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora, please feel free to reopen this bug against that version. If you are unable to reopen this bug, please file a new report against the current release. If you experience problems, please add a comment to this bug.

Thank you for reporting this bug, and we are sorry it could not be fixed.
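For readers tracing the failure chain in the traceback above (a libblockdev DMError during devicetree population cascading into the live payload setup error), here is a minimal sketch of that chain. All class and function names below are stand-ins, not blivet or pyanaconda code; the device names and error strings are taken from the log.

```python
# Minimal sketch of the failure chain seen in the traceback; these are
# stand-in names, NOT blivet/pyanaconda internals.

class DMError(Exception):
    """Stand-in for gi.overrides.BlockDev.DMError."""

class PayloadInstallError(Exception):
    """Stand-in for pyanaconda.packaging.PayloadInstallError."""

def get_member_raid_sets(uuid, name, major, minor):
    # Simulates blockdev.dm.get_member_raid_sets() failing with
    # "No RAIDs discovered" on the pdc (Promise fakeRAID) member disks.
    raise DMError("No RAIDs discovered")

def populate_devicetree():
    # storageInitialize -> storage.reset() -> devicetree.populate();
    # the unhandled DMError aborts storage scanning here.
    get_member_raid_sets("uuid", "pdc_set", 8, 48)

def storage_init():
    """Return True if storage scanning succeeded, False otherwise."""
    try:
        populate_devicetree()
        return True
    except DMError:
        return False

def payload_setup(storage_ok):
    # Because storage scanning never finished, /dev/mapper/live-base is
    # absent from the device tree and the live payload setup fails.
    if not storage_ok:
        raise PayloadInstallError(
            "Unable to find osimg for /dev/mapper/live-base")

try:
    payload_setup(storage_init())
except PayloadInstallError as e:
    print(e)  # Unable to find osimg for /dev/mapper/live-base
```

One possible mitigation, sketched by `storage_init` here, would be for the dmraid-member handling to treat "No RAIDs discovered" as an empty result instead of letting the exception abort the whole scan; whether that is the correct fix is a question for the blivet maintainers.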