Description of problem:
Booted into the netinst image and received an error. Tried various boot options: modprobe.blacklist=nouveau, inst.geoloc=0, and setting a custom resolution. None of these worked.

Version-Release number of selected component:
anaconda-28.22.10

The following was filed automatically by anaconda:
anaconda 28.22.10 exception report
Traceback (most recent call first):
  File "/usr/lib64/python3.6/site-packages/gi/overrides/BlockDev.py", line 824, in wrapped
    raise transform[1](msg)
  File "/usr/lib/python3.6/site-packages/blivet/populator/helpers/dmraid.py", line 56, in run
    rs_names = blockdev.dm.get_member_raid_sets(name, uuid, major, minor)
  File "/usr/lib/python3.6/site-packages/blivet/populator/populator.py", line 303, in handle_format
    helper_class(self, info, device).run()
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/populator/populator.py", line 275, in handle_device
    self.handle_format(info, device)
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/populator/populator.py", line 462, in _populate
    self.handle_device(dev)
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/populator/populator.py", line 412, in populate
    self._populate()
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/blivet.py", line 161, in reset
    self.devicetree.populate(cleanup_only=cleanup_only)
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/pyanaconda/storage/osinstall.py", line 1670, in reset
    super().reset(cleanup_only=cleanup_only)
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/pyanaconda/storage/osinstall.py", line 2193, in storage_initialize
    storage.reset()
  File "/usr/lib64/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib64/python3.6/site-packages/pyanaconda/threading.py", line 291, in run
    threading.Thread.run(self)
gi.overrides.BlockDev.DMError: No RAIDs discovered

Additional info:
addons:         com_redhat_docker, com_redhat_kdump
cmdline:        /usr/bin/python3 /sbin/anaconda
cmdline_file:   BOOT_IMAGE=vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=Fedora-WS-dvd-x86_64-28 modprobe.blacklist=nouveau inst.geoloc=0 quiet
executable:     /sbin/anaconda
hashmarkername: anaconda
kernel:         4.16.3-301.fc28.x86_64
product:        Fedora
release:        Cannot get release name.
type:           anaconda
version:        28
Created attachment 1431650 [details] File: anaconda-tb
Created attachment 1431651 [details] File: anaconda.log
Created attachment 1431652 [details] File: dbus.log
Created attachment 1431653 [details] File: dnf.librepo.log
Created attachment 1431654 [details] File: environ
Created attachment 1431655 [details] File: hawkey.log
Created attachment 1431656 [details] File: lorax-packages.log
Created attachment 1431657 [details] File: lsblk_output
Created attachment 1431658 [details] File: nmcli_dev_list
Created attachment 1431659 [details] File: os_info
Created attachment 1431660 [details] File: program.log
Created attachment 1431661 [details] File: storage.log
Created attachment 1431662 [details] File: syslog
Created attachment 1431663 [details] File: ifcfg.log
Created attachment 1431664 [details] File: packaging.log
Based on the traceback this seems to me like an issue in our storage library. Changing component.
Looks like there are two DDF (FW) RAID disks in the system. Please give the 'inst.nodmraid' boot option a try. If that doesn't help, you could also use a kickstart file with an 'ignoredisk' command [1] to ignore these two disks.

[1] https://pykickstart.readthedocs.io/en/latest/kickstart-docs.html#ignoredisk

OR

Are these disks part of any FW RAID that is actually in use? If not, could you remove the metadata from those devices ('sudo wipefs -a <device>') before the installation? <--- THIS WILL CAUSE LOSS OF DATA ON THOSE TWO DISKS!!!!
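For the kickstart approach, a minimal fragment might look like the following. Note that 'sda' and 'sdb' are placeholder names, not taken from this report; the actual device names should be checked against the attached lsblk_output:

```
# Kickstart fragment: exclude the two FW RAID member disks from installation.
# sda/sdb are assumed names -- replace with the real devices from lsblk_output.
ignoredisk --drives=sda,sdb
```

The installer would then skip those disks entirely when scanning storage, which should avoid the dmraid code path that raised the DMError.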
This message is a reminder that Fedora 28 is nearing its end of life. On 2019-May-28 Fedora will stop maintaining and issuing updates for Fedora 28. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '28'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue, and we are sorry that we were not able to fix it before Fedora 28 reached end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
Fedora 28 changed to end-of-life (EOL) status on 2019-05-28. Fedora 28 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result, we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora, please feel free to reopen this bug against that version. If you are unable to reopen this bug, please file a new report against the current release. If you experience problems, please add a comment to this bug.

Thank you for reporting this bug, and we are sorry it could not be fixed.