Description of problem:
Booted 22 Beta RC1 Server DVD (dd'ed to USB) on a system with an existing Intel firmware RAID-0 set with a Fedora install on it.

Version-Release number of selected component:
anaconda-22.20.9-1

The following was filed automatically by anaconda:
anaconda 22.20.9-1 exception report
Traceback (most recent call first):
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 909, in addUdevPartitionDevice
    name = blockdev.md_name_from_node(name)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1225, in addUdevDevice
    device = self.addUdevPartitionDevice(info)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 2194, in _populate
    self.addUdevDevice(dev)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 2128, in populate
    self._populate()
  File "/usr/lib/python2.7/site-packages/blivet/blivet.py", line 277, in reset
    self.devicetree.populate(cleanupOnly=cleanupOnly)
  File "/usr/lib/python2.7/site-packages/blivet/osinstall.py", line 1117, in storageInitialize
    storage.reset()
  File "/usr/lib64/python2.7/threading.py", line 766, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 244, in run
    threading.Thread.run(self, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 115, in wait
    self.raise_if_error(name)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/timezone.py", line 75, in time_initialize
    threadMgr.wait(THREAD_STORAGE)
  File "/usr/lib64/python2.7/threading.py", line 766, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 244, in run
    threading.Thread.run(self, *args, **kwargs)
Error: g-bd-md-error-quark: No name found for the node 'md126p1' (2)

Additional info:
addons:         com_redhat_kdump
cmdline:        /usr/bin/python2 /sbin/anaconda
cmdline_file:   BOOT_IMAGE=vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=Fedora-22_B-x86_64 quiet
dnf.rpm.log:    Apr 08 20:41:04 INFO --- logging initialized ---
executable:     /sbin/anaconda
hashmarkername: anaconda
kernel:         4.0.0-0.rc5.git4.1.fc22.x86_64
product:        Fedora
release:        Cannot get release name.
type:           anaconda
version:        22
Created attachment 1012387 File: anaconda-tb
Created attachment 1012388 File: anaconda.log
Created attachment 1012389 File: dnf.log
Created attachment 1012390 File: environ
Created attachment 1012391 File: lsblk_output
Created attachment 1012392 File: nmcli_dev_list
Created attachment 1012393 File: os_info
Created attachment 1012394 File: program.log
Created attachment 1012395 File: storage.log
Created attachment 1012396 File: syslog
Created attachment 1012397 File: ifcfg.log
Created attachment 1012398 File: packaging.log
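For context on the crash itself: blockdev.md_name_from_node() is asked to map the kernel node name 'md126p1' back to the array's /dev/md/ alias, and fails because no alias is found. Below is a rough Python sketch of that kind of lookup; it is illustrative only, not libblockdev's actual implementation, and the symlink-scanning detail is an assumption consistent with the udev-race theory in the comments that follow.

import os

# Illustrative sketch only (not libblockdev's real code): resolve a kernel
# node name like 'md126p1' to its /dev/md/ alias by scanning the symlinks
# udev creates there. If udev has not created the symlink yet, or the array
# has gone away, nothing matches and the lookup fails -- the same condition
# as the "No name found for the node 'md126p1'" error above.
def md_name_from_node(node):
    md_dir = "/dev/md"
    if os.path.isdir(md_dir):
        for name in os.listdir(md_dir):
            path = os.path.join(md_dir, name)
            # entries normally look like /dev/md/Volume0p1 -> ../md126p1
            if os.path.islink(path) and \
               os.path.basename(os.readlink(path)) == node:
                return name
    raise RuntimeError("No name found for the node '%s'" % node)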
This seemed to be reproducible with the set in the exact state I hit it in (at least, I booted twice and hit it both times), but after I manually wiped the set I can't reproduce it any more: I ran three installs in succession and didn't hit it on any of them, and after the third I can still boot the installer without hitting the crash. So I'm not quite sure what special state I got the set into, but this at least doesn't seem to be a clear showstopper.
(In reply to awilliam from comment #13)
> This seemed to be reproducible with the set in the exact state I hit it in
> (at least, I booted twice and hit it both times), but after I manually wiped
> the set I can't reproduce it any more: I ran three installs in succession
> and didn't hit it on any of them, and after the third I can still boot the
> installer without hitting the crash.

When you say "I manually wiped the set", what exactly did you do -- wipe filesystems, LVs, etc.; remove partitions; delete RAID volumes/sets; or something else?
Wiped the partitions with fdisk and created a new disk label.
(In reply to awilliam from comment #15)
> Wiped the partitions with fdisk and created a new disk label.

What do you want to bet that recreating the partitions brings back the crash? (I still believe that the root cause is that Anaconda is looking for symlinks in /dev/md/ before udev is done creating them. Partitions on top of MD RAID devices seem to take a particularly long time for udev to process for some reason.)
Well, that's what I was doing in #c13. So far as I can recall, all that was on the disk was a previous F22 install; that's why I was installing over and over, but that doesn't seem to bring the bug back.
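If the /dev/md/ symlink race described in the previous comments is the root cause, the obvious mitigation is to let udev's event queue drain before scanning for the symlinks. A minimal sketch of that idea, reusing the hypothetical md_name_from_node() above (illustrative only, not blivet's actual fix):

import subprocess

# Drain udev's event queue before looking for /dev/md/ symlinks.
# 'udevadm settle' blocks until all queued udev events have been handled
# or the timeout (in seconds) expires; check_call() raises on a non-zero
# exit status.
def wait_for_udev(timeout=30):
    subprocess.check_call(["udevadm", "settle", "--timeout=%d" % timeout])

wait_for_udev()
name = md_name_from_node("md126p1")  # retry the lookup sketched earlier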
*** This bug has been marked as a duplicate of bug 1160424 ***
Looking at adam's logs here, what I see is something is stopping the fwraid from outside of anaconda/blivet.

Here you can see (from syslog) where the system activates the fwraid:

20:40:52,900 INFO kernel:[ 18.720320] md: bind<sda>
20:40:52,900 INFO kernel:[ 18.729999] md: bind<sdb>
20:40:52,900 INFO kernel:[ 18.732877] md: bind<sdb>
20:40:52,900 INFO kernel:[ 18.733078] md: bind<sda>
20:40:52,900 INFO kernel:[ 18.736946] md/raid0:md126: md_size is 1953536000 sectors.
20:40:52,900 INFO kernel:[ 18.736950] md: RAID0 configuration for md126 - 1 zone
20:40:52,900 INFO kernel:[ 18.736951] md: zone0=[sda/sdb]
20:40:52,900 INFO kernel:[ 18.736955]       zone-offset= 0KB, device-offset= 0KB, size= 976768256KB
20:40:52,900 INFO kernel:[ 18.736956]
20:40:52,900 INFO kernel:[ 18.736974] md126: detected capacity change from 0 to 1000210432000
20:40:52,900 INFO kernel:[ 18.762846]  md126: p1 p2 p3
<snip>
20:40:52,900 INFO kernel:[ 18.878739] md: export_rdev(sdb)
20:40:52,900 INFO kernel:[ 18.878781] md: export_rdev(sda)
20:40:52,900 INFO kernel:[ 18.973909] md: export_rdev(sdb)
20:40:52,900 INFO kernel:[ 18.973942] md: export_rdev(sda)

And then, inexplicably, there's this:

20:41:05,023 INFO kernel:[ 41.519132]  sda: sda1 sda2 sda3
20:41:05,023 WARNING kernel:[ 41.519135] sda: partition table partially beyond EOD, truncated
20:41:05,023 WARNING kernel:[ 41.519426] sda: p2 size 1948653568 extends beyond EOD, truncated
20:41:05,023 WARNING kernel:[ 41.519495] sda: p3 start 1949630464 is beyond EOD, truncated
20:41:05,096 WARNING kernel:[ 41.592055] Alternate GPT is invalid, using primary GPT.
20:41:05,096 INFO kernel:[ 41.592069]  sdb: sdb1 sdb2 sdb3
<snip>
20:41:06,611 INFO kernel:[ 43.109073]  md126: p1 p2 p3

To me, this looks like sda is disappearing and then reappearing. I can't comment on whether this should be handled transparently by the fwraid, but I can say that a disappearing array is going to be difficult to install to. Notice that there are no active md devices in the lsblk output at the bottom of program.log. I see nothing in blivet's logs to indicate that blivet deactivated the array.
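One quick way to confirm this state from a shell in the installer environment is to check /proc/mdstat for active arrays; on this machine a healthy boot should show an "md126 : active ..." line. A small hypothetical helper (not part of blivet):

# Check whether any md arrays are still active by parsing /proc/mdstat.
# An empty result matches the "no active md devices" observation from the
# lsblk output at the bottom of program.log.
def active_md_arrays(mdstat_path="/proc/mdstat"):
    active = []
    with open(mdstat_path) as f:
        for line in f:
            # device lines look like: "md126 : active raid0 sdb[1] sda[0]"
            if line.startswith("md") and " : active" in line:
                active.append(line.split()[0])
    return active

print(active_md_arrays() or "no active md devices")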
*** Bug 1217666 has been marked as a duplicate of this bug. ***