Bug 1170755

Summary: MDRaidError: name_from_md_node(md126p1) failed [Intel BIOS RAID related?]
Product: Fedora
Reporter: Ian Pilcher <ipilcher>
Component: anaconda
Assignee: Anaconda Maintenance Team <anaconda-maint-list>
Status: CLOSED DUPLICATE
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: unspecified
Priority: unspecified
Version: 21
CC: anaconda-maint-list, g.kaviyarasu, jonathan, vanmeeuwen+fedora
Hardware: x86_64
OS: Unspecified
Whiteboard: abrt_hash:ec7bc45e7de9e1cdf2fa22400793be56dcd25d3ca295f79e2407962618197c0c
Doc Type: Bug Fix
Last Closed: 2014-12-08 22:29:01 UTC
Attachments: anaconda-tb, anaconda.log, environ, lsblk_output,
nmcli_dev_list, os_info, program.log, storage.log, syslog, ifcfg.log,
packaging.log

Description Ian Pilcher 2014-12-04 19:01:42 UTC
Description of problem:
Booted the Fedora 21 RC5 netinst ISO (from a USB thumb drive); anaconda crashed during storage scanning.

Version-Release number of selected component:
anaconda-21.48.21-1

The following was filed automatically by anaconda:
anaconda 21.48.21-1 exception report
Traceback (most recent call first):
  File "/usr/lib/python2.7/site-packages/blivet/devicelibs/mdraid.py", line 360, in name_from_md_node
    raise MDRaidError("name_from_md_node(%s) failed" % node)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 909, in addUdevPartitionDevice
    name = mdraid.name_from_md_node(name)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1201, in addUdevDevice
    device = self.addUdevPartitionDevice(info)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 2170, in _populate
    self.addUdevDevice(dev)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 2105, in populate
    self._populate()
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 480, in reset
    self.devicetree.populate(cleanupOnly=cleanupOnly)
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 183, in storageInitialize
    storage.reset()
  File "/usr/lib64/python2.7/threading.py", line 766, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 227, in run
    threading.Thread.run(self, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 112, in wait
    self.raise_if_error(name)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/timezone.py", line 75, in time_initialize
    threadMgr.wait(THREAD_STORAGE)
  File "/usr/lib64/python2.7/threading.py", line 766, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 227, in run
    threading.Thread.run(self, *args, **kwargs)
MDRaidError: name_from_md_node(md126p1) failed

Additional info:
addons:         com_redhat_kdump
cmdline:        /usr/bin/python  /sbin/anaconda
cmdline_file:   BOOT_IMAGE=vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=Fedora-S-21-x86_64 rd.live.check net.ifnames=0
executable:     /sbin/anaconda
hashmarkername: anaconda
kernel:         3.17.4-301.fc21.x86_64
product:        Fedora
release:        Cannot get release name.
type:           anaconda
version:        Fedora

Comment 1 Ian Pilcher 2014-12-04 19:01:44 UTC
Created attachment 964791 [details]
File: anaconda-tb

Comment 2 Ian Pilcher 2014-12-04 19:01:45 UTC
Created attachment 964792 [details]
File: anaconda.log

Comment 3 Ian Pilcher 2014-12-04 19:01:46 UTC
Created attachment 964793 [details]
File: environ

Comment 4 Ian Pilcher 2014-12-04 19:01:46 UTC
Created attachment 964794 [details]
File: lsblk_output

Comment 5 Ian Pilcher 2014-12-04 19:01:47 UTC
Created attachment 964795 [details]
File: nmcli_dev_list

Comment 6 Ian Pilcher 2014-12-04 19:01:48 UTC
Created attachment 964796 [details]
File: os_info

Comment 7 Ian Pilcher 2014-12-04 19:01:49 UTC
Created attachment 964797 [details]
File: program.log

Comment 8 Ian Pilcher 2014-12-04 19:01:50 UTC
Created attachment 964798 [details]
File: storage.log

Comment 9 Ian Pilcher 2014-12-04 19:01:51 UTC
Created attachment 964799 [details]
File: syslog

Comment 10 Ian Pilcher 2014-12-04 19:01:52 UTC
Created attachment 964800 [details]
File: ifcfg.log

Comment 11 Ian Pilcher 2014-12-04 19:01:52 UTC
Created attachment 964801 [details]
File: packaging.log

Comment 12 Ian Pilcher 2014-12-04 19:13:48 UTC
This (from the anaconda backtrace) makes me think that it's getting confused by the IMSM (Intel BIOS RAID) metadata device:


MDRaidError: name_from_md_node(md126p1) failed

Local variables in innermost frame:
node: md126p1
md_name: md127
name: None
link: imsm
full_path: /dev/md/imsm
md_dir: /dev/md
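The local variables make the failure mode concrete: /dev/md contains only the
IMSM container link (imsm -> md127), so there is no symlink whose target matches
the partition node md126p1. A minimal sketch of the lookup (simplified from
blivet's mdraid.name_from_md_node; a temporary directory stands in for /dev/md,
and error details differ from the real code) reproduces the error:

```python
import os
import tempfile

def name_from_md_node(md_dir, node):
    """Return the name of the md_dir symlink whose target is the given node.

    Simplified sketch of blivet's mdraid.name_from_md_node; md_dir stands
    in for /dev/md.
    """
    for link in os.listdir(md_dir):
        target = os.path.basename(os.readlink(os.path.join(md_dir, link)))
        if target == node:
            return link
    raise RuntimeError("name_from_md_node(%s) failed" % node)

# Reproduce the state shown in the local variables above: /dev/md holds
# only the IMSM container link (imsm -> md127), so the partition node
# md126p1 has no matching symlink and the lookup fails.
md_dir = tempfile.mkdtemp()
os.symlink("/dev/md127", os.path.join(md_dir, "imsm"))
try:
    name_from_md_node(md_dir, "md126p1")
except RuntimeError as e:
    print(e)  # name_from_md_node(md126p1) failed
```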

Comment 13 Ian Pilcher 2014-12-05 03:51:34 UTC
I did some playing around with running "anaconda --askmethod" from a
live CD, and the results were ... interesting.  At first, I got the
expected crash, so I edited
/usr/lib/python2.7/site-packages/blivet/devicelibs/mdraid.py and added
some additional logging to both name_from_md_node and
md_node_from_name.  Just to ensure that the updated version was used,
I also deleted mdraid.pyc and mdraid.pyo.

Once I did this, the crash went away.

This suggests a couple of possible root causes:

1. There was something messed up about the pre-compiled files.

2. (Far more likely) Some sort of race condition with udev.  Partitions
   on top of MD RAID seem to take a particularly long time for all of
   the helper programs to run, so perhaps udev simply isn't finished
   creating all the symlinks that anaconda needs (until I slow anaconda
   down by adding the logging and/or removing the pre-compiled files).
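If the root cause really is a udev race, one common mitigation is to retry the
symlink lookup after asking udev to flush its event queue (udevadm settle). The
sketch below is hypothetical — it is not the fix that blivet shipped, the
function name is illustrative, and a temporary directory again stands in for
/dev/md:

```python
import os
import subprocess
import time

def name_from_md_node_retry(md_dir, node, attempts=5, delay=0.5):
    """Retry the /dev/md symlink lookup, giving udev time to finish.

    Hypothetical mitigation for the suspected race described above;
    not the actual blivet fix.
    """
    for _ in range(attempts):
        for link in os.listdir(md_dir):
            target = os.path.basename(os.readlink(os.path.join(md_dir, link)))
            if target == node:
                return link
        try:
            # Block until udev has processed all queued events.
            subprocess.call(["udevadm", "settle", "--timeout=5"])
        except OSError:
            pass  # udevadm unavailable outside the installer environment
        time.sleep(delay)
    raise RuntimeError("name_from_md_node(%s) failed" % node)
```

Slowing anaconda down with extra logging or by forcing recompilation of the
.py files would have a similar masking effect, which fits the observation
above.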

Comment 14 David Shea 2014-12-08 22:29:01 UTC

*** This bug has been marked as a duplicate of bug 1160424 ***