Description of problem:
After the dialog phase, after clicking the button to start the actual installation, and (judging by what was on screen) during the "Setting up the installation environment" phase, this crash occurred. I am not sure whether it is related, but the previously created software MD RAID1 devices /dev/md{0,2,3,4,5} were in a state where md0, md2, md3 and md4 were synchronized and md5 was still resyncing.

Version-Release number of selected component:
anaconda-19.30.13-1

The following was filed automatically by anaconda:
anaconda 19.30.13-1 exception report
Traceback (most recent call first):
  File "/usr/lib/python2.7/site-packages/blivet/devicelibs/mdraid.py", line 217, in mdactivate
    raise MDRaidError("mdactivate failed for %s: %s" % (device, msg))
  File "/usr/lib/python2.7/site-packages/blivet/devices.py", line 3166, in _setup
    uuid=self.uuid)
  File "/usr/lib/python2.7/site-packages/blivet/devices.py", line 717, in setup
    self._setup(orig=orig)
  File "/usr/lib/python2.7/site-packages/blivet/deviceaction.py", line 526, in execute
    self.device.setup(orig=True)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 237, in processActions
    action.execute()
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 310, in doIt
    self.devicetree.processActions()
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 169, in turnOnFilesystems
    storage.doIt()
  File "/usr/lib/python2.7/site-packages/pyanaconda/install.py", line 140, in doInstall
    turnOnFilesystems(storage, mountOnly=flags.flags.dirInstall)
  File "/usr/lib/python2.7/threading.py", line 764, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib/python2.7/site-packages/pyanaconda/threads.py", line 168, in run
    threading.Thread.run(self, *args, **kwargs)
MDRaidError: mdactivate failed for /dev/md/5: running mdadm --assemble /dev/md/5 --uuid=04299f20:9b8e9af8:0df762b0:a5bc14bd --run /dev/sda7 /dev/sdb7 failed

Additional info:
cmdline: /usr/bin/python /sbin/anaconda
cmdline_file: initrd=initrd.img inst.stage2=hd:LABEL=Fedora\x2019\x20i386 repo=nfs:ws22:/mnt/ARCHIV/dist/RH/fedora/19/i386/os vnc vncpassword=server BOOT_IMAGE=vmlinuz
executable: /sbin/anaconda
hashmarkername: anaconda
kernel: 3.9.5-301.fc19.i686
product: Fedora
release: Cannot get release name.
type: anaconda
version: 19
Created attachment 774614 [details] File: anaconda.log
Created attachment 774615 [details] File: environ
Created attachment 774616 [details] File: lsblk_output
Created attachment 774617 [details] File: nmcli_dev_list
Created attachment 774618 [details] File: os_info
Created attachment 774619 [details] File: program.log
Created attachment 774620 [details] File: storage.log
Created attachment 774621 [details] File: syslog
Created attachment 774622 [details] File: ifcfg.log
Created attachment 774623 [details] File: packaging.log
It seems this crash occurs when software MD RAIDs are created at an early stage of the installation (with mdadm from the tty2 console): the kernel begins syncing them, and then, when 'Rescan disks' is clicked in the manual partitioning dialog, anaconda correctly detects the RAIDs but, IMO, neglects the fact that they are already active. When the RAIDs are created manually and the system is then restarted (back into anaconda/installation), anaconda correctly detects and assembles the RAIDs and the installation passes fine. I have not tested it, but manually stopping the RAIDs after creating them and then clicking 'Rescan disks' might help too. However, that would likely not work for arrays without a superblock, since the RAID configuration cannot be determined for those. So IMO anaconda should be able to cope with MD RAIDs that are already running at rescan time.
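For the record, the untested workaround described above would look roughly like this from the tty2 console (a sketch only; the device names are the ones from this report, and stopping a resyncing array mid-install has not been verified to be safe here):

```shell
# Stop the freshly created arrays so that anaconda's rescan can
# assemble them itself (untested; requires root on tty2).
for md in /dev/md0 /dev/md2 /dev/md3 /dev/md4 /dev/md5; do
    mdadm --stop "$md"
done
# Then click 'Rescan disks' in the manual partitioning dialog.
```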
You could make your life easier by giving names to your md arrays since you are using the new metadata format that allows it. That gives them a persistent name instead of them getting whatever minor is available when they are started and makes them much easier to manage.
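As an illustration of the naming suggestion above, creating the array with v1.x metadata and an explicit name would look something like this (a sketch; the partitions are the ones from the traceback, and the name "data5" is made up):

```shell
# Create a RAID1 with a persistent name using v1.2 metadata;
# the array then appears as /dev/md/data5 regardless of which
# minor number is free when it is started.
mdadm --create /dev/md/data5 --level=1 --raid-devices=2 \
      --metadata=1.2 --name=data5 /dev/sda7 /dev/sdb7
```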
Please post the contents of /sys/class/block/md5/md/array_state. Thanks.