Description of problem:
My guess as to how this happened: I have a firmware RAID with 2 disks. Then I went into the BIOS and switched the RAID mode to AHCI, but the disks' RAID signatures were not wiped. Then I booted the F23 netinst and anaconda crashed.

Version-Release number of selected component:
anaconda-23.19.10-1

The following was filed automatically by anaconda:
anaconda 23.19.10-1 exception report
Traceback (most recent call first):
  File "/usr/lib/python3.4/site-packages/gi/overrides/BlockDev.py", line 416, in wrapped
    raise transform[1](msg)
  File "/usr/lib/python3.4/site-packages/blivet/populator.py", line 1249, in handleUdevDMRaidMemberFormat
    rs_names = blockdev.dm.get_member_raid_sets(uuid, name, major, minor)
  File "/usr/lib/python3.4/site-packages/blivet/populator.py", line 1474, in handleUdevDeviceFormat
    self.handleUdevDMRaidMemberFormat(info, device)
  File "/usr/lib/python3.4/site-packages/blivet/populator.py", line 764, in addUdevDevice
    self.handleUdevDeviceFormat(info, device)
  File "/usr/lib/python3.4/site-packages/blivet/populator.py", line 1692, in _populate
    self.addUdevDevice(dev)
  File "/usr/lib/python3.4/site-packages/blivet/populator.py", line 1623, in populate
    self._populate()
  File "/usr/lib/python3.4/site-packages/blivet/devicetree.py", line 554, in populate
    self._populator.populate(cleanupOnly=cleanupOnly)
  File "/usr/lib/python3.4/site-packages/blivet/blivet.py", line 279, in reset
    self.devicetree.populate(cleanupOnly=cleanupOnly)
  File "/usr/lib/python3.4/site-packages/blivet/osinstall.py", line 1157, in storageInitialize
    storage.reset()
  File "/usr/lib/python3.4/threading.py", line 868, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.4/site-packages/pyanaconda/threads.py", line 253, in run
    threading.Thread.run(self, *args, **kwargs)
  File "/usr/lib/python3.4/site-packages/pyanaconda/threads.py", line 171, in raise_if_error
    raise exc_info[0](exc_info[1]).with_traceback(exc_info[2])
  File "/usr/lib/python3.4/site-packages/pyanaconda/threads.py", line 116, in wait
    self.raise_if_error(name)
  File "/usr/lib/python3.4/site-packages/pyanaconda/timezone.py", line 76, in time_initialize
    threadMgr.wait(THREAD_STORAGE)
  File "/usr/lib/python3.4/threading.py", line 868, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.4/site-packages/pyanaconda/threads.py", line 253, in run
    threading.Thread.run(self, *args, **kwargs)
gi.overrides.BlockDev.DMError: Failed to group_set

Additional info:
addons:         com_redhat_kdump
cmdline:        /usr/bin/python3 /sbin/anaconda
cmdline_file:   BOOT_IMAGE=vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=Fedora-WS-23-i386 quiet
dnf.rpm.log:    Oct 30 07:53:17 INFO --- logging initialized ---
executable:     /sbin/anaconda
hashmarkername: anaconda
kernel:         4.2.3-300.fc23.i686
product:        Fedora
release:        Cannot get release name.
type:           anaconda
version:        23
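The stale signatures described above can be confirmed before anaconda is ever booted: `wipefs` (from util-linux), run with no options, only lists the on-disk signatures and modifies nothing. A minimal sketch, using a disposable image file as a stand-in for a real disk node such as /dev/sda (the image path and the planted DOS/MBR signature are illustrative; this report's actual disks appear only in the lsblk_output attachment):

```shell
# List (but do not modify) on-disk signatures, roughly as anaconda's
# storage scan would see them. A throwaway image file stands in for
# a real disk node here.
img=$(mktemp /tmp/fakedisk.XXXXXX)
dd if=/dev/zero of="$img" bs=1M count=1 status=none

# Plant a DOS/MBR signature (bytes 0x55 0xAA at offset 510, written
# as octal escapes for printf portability) so the image carries a
# detectable signature, much like leftover firmware-RAID metadata.
printf '\125\252' | dd of="$img" bs=1 seek=510 conv=notrunc status=none

# With no options, wipefs only reports what it finds; nothing is erased.
wipefs "$img"
rm -f "$img"
```

On the reporter's hardware, the same listing against the member disks would presumably have shown the leftover firmware-RAID member signature that blivet's dmraid scan later tripped over.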
Created attachment 1087837 [details] File: anaconda-tb
Created attachment 1087838 [details] File: anaconda.log
Created attachment 1087839 [details] File: dnf.log
Created attachment 1087840 [details] File: environ
Created attachment 1087841 [details] File: lsblk_output
Created attachment 1087842 [details] File: nmcli_dev_list
Created attachment 1087843 [details] File: os_info
Created attachment 1087844 [details] File: program.log
Created attachment 1087845 [details] File: storage.log
Created attachment 1087846 [details] File: syslog
Created attachment 1087847 [details] File: ifcfg.log
Created attachment 1087848 [details] File: packaging.log
IIRC, we've generally taken the position that it's the sysadmin's job to deal with stale RAID metadata, not anaconda's, and anaconda isn't expected to handle installing to a single disk split out of a RAID set like this. It'd be nice if it didn't crash, but I'm -1 blocker. I can probably pull out some precedents if we need 'em.
Created attachment 1087853 [details] anaconda-tb-7nkbjx7t
There are two tracebacks in /tmp; this is the second one. I also see two "anaconda crashed" windows, one active and one inactive; the inactive one is overlaid in the middle of the screen and cannot be dismissed.
After running "wipefs -a" on both disks, anaconda booted fine the next time. It would be great if a) anaconda did not crash, and b) it either advised me to wipe the disks manually or offered to do that for me. Otherwise the user has no idea what is wrong or how to fix it.
(In reply to awilliam from comment #13)
> It'd be nice if it didn't crash, but I'm -1 blocker. I can probably pull out
> some precedents if we need 'em.
I'm sure; that's why I haven't even proposed this :)
This message is a reminder that Fedora 23 is nearing its end of life. Approximately four weeks from now Fedora will stop maintaining and issuing updates for Fedora 23. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '23'.

Package Maintainer: if you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue, and we are sorry that we were not able to fix it before Fedora 23 reached end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
Fedora 23 changed to end-of-life (EOL) status on 2016-12-20. Fedora 23 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora, please feel free to reopen this bug against that version. If you are unable to reopen this bug, please file a new report against the current release. If you experience problems, please add a comment to this bug.

Thank you for reporting this bug and we are sorry it could not be fixed.