Description of problem:
attempted install over system with pre-existing RAID config (mdadm)

The following was filed automatically by anaconda:
anaconda 19.30-1 exception report
Traceback (most recent call first):
  File "/usr/lib/python2.7/site-packages/blivet/devicelibs/mdraid.py", line 278, in name_from_md_node
    raise MDRaidError("name_from_md_node(%s) failed" % node)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 2190, in resolveDevice
    md_name = devicelibs.mdraid.name_from_md_node(devspec[5:])
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 2879, in parseFSTab
    options=options)
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 2802, in findExistingInstallations
    (mounts, swaps) = parseFSTab(devicetree, chroot=ROOT_PATH)
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 420, in reset
    self.roots = findExistingInstallations(self.devicetree)
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 140, in storageInitialize
    storage.reset()
  File "/usr/lib64/python2.7/threading.py", line 766, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 168, in run
    threading.Thread.run(self, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 87, in wait
    self.raise_error(name)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/packaging/__init__.py", line 657, in payloadInitialize
    threadMgr.wait(THREAD_STORAGE)
  File "/usr/lib64/python2.7/threading.py", line 766, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 168, in run
    threading.Thread.run(self, *args, **kwargs)
MDRaidError: name_from_md_node(md0) failed

Version-Release number of selected component:
anaconda-19.30-1

Additional info:
reporter:       libreport-2.1.4
cmdline:        /usr/bin/python /sbin/anaconda
cmdline_file:   initrd=initrd.img inst.stage2=hd:LABEL=Fedora\x2019-Beta\x20x86_64 quiet BOOT_IMAGE=vmlinuz
executable:     /sbin/anaconda
hashmarkername: anaconda
kernel:         3.9.2-301.fc19.x86_64
product:        Fedora
release:        Cannot get release name.
type:           anaconda
version:        19-Beta

Truncated backtrace:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 168, in run
    threading.Thread.run(self, *args, **kwargs)
  File "/usr/lib64/python2.7/threading.py", line 766, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/packaging/__init__.py", line 657, in payloadInitialize
    threadMgr.wait(THREAD_STORAGE)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 87, in wait
    self.raise_error(name)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 168, in run
    threading.Thread.run(self, *args, **kwargs)
  File "/usr/lib64/python2.7/threading.py", line 766, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 140, in storageInitialize
    storage.reset()
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 420, in reset
    self.roots = findExistingInstallations(self.devicetree)
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 2802, in findExistingInstallations
    (mounts, swaps) = parseFSTab(devicetree, chroot=ROOT_PATH)
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 2879, in parseFSTab
    options=options)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 2190, in resolveDevice
    md_name = devicelibs.mdraid.name_from_md_node(devspec[5:])
  File "/usr/lib/python2.7/site-packages/blivet/devicelibs/mdraid.py", line 278, in name_from_md_node
    raise MDRaidError("name_from_md_node(%s) failed" % node)
MDRaidError: name_from_md_node(md0) failed
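For context on the failing frame: a `name_from_md_node`-style lookup maps a raw kernel node name like `md0` back to the array's friendly name via the symlinks udev creates under `/dev/md/` (e.g. `/dev/md/root -> ../md0`). The following is a simplified sketch of that kind of lookup, not blivet's actual implementation; the `md_dir` parameter is added here purely for illustration. It shows why the call has nothing to go on once an array has been stopped and its symlink removed:

```python
import os

def name_from_md_node(node, md_dir="/dev/md"):
    """Map a kernel md node name (e.g. "md0") to its array name.

    Scans md_dir for symlinks whose target resolves to the given node.
    If no symlink points at the node (for instance, the array was
    stopped and udev removed the link), the lookup must fail.
    """
    if os.path.isdir(md_dir):
        for name in os.listdir(md_dir):
            path = os.path.join(md_dir, name)
            if os.path.islink(path) and \
                    os.path.basename(os.readlink(path)) == node:
                return name
    raise ValueError("name_from_md_node(%s) failed" % node)
```

Under this reading, the `MDRaidError` above is the expected outcome when `/dev/md0` is referenced in an old fstab but the corresponding array is no longer assembled.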
Created attachment 759006 [details] File: anaconda-tb
Created attachment 759007 [details] File: anaconda.log
Created attachment 759008 [details] File: backtrace
Created attachment 759009 [details] File: environ
Created attachment 759010 [details] File: ifcfg.log
Created attachment 759011 [details] File: lsblk_output
Created attachment 759012 [details] File: nmcli_dev_list
Created attachment 759013 [details] File: packaging.log
Created attachment 759014 [details] File: program.log
Created attachment 759015 [details] File: storage.log
Created attachment 759016 [details] File: syslog
This happened while attempting to install Fedora 19-beta over the top of an existing, old Ubuntu system. The old system had been jumbled up somewhat; the disk config used to be:

disk A \___ separate partitions in RAID1 (/boot, /, swap)
disk B /

One of the drives failed, and the new configuration (keeping the same naming scheme as above) is:

disk B
disk C
disk D \___ RAID1 with LVM (1 PV, 1 VG, 1 LV)
disk E /

I wanted to install Fedora over the top of disks B and C, as a complete overwrite. I'm going to try clearing out the mdadm config from disk B, unplugging disks D and E, and trying the installation again.
Deleting the half-mirror partitions from disk B and attempting a re-install worked around the issue. To do this, I:
- booted to the Troubleshooting mode of the installer and got a shell
- deleted the partitions via fdisk
- got paranoid and dd'd 100MB of zeros over the front of the disk
- ran `sync; sync; reboot`

After restarting, the installer was happy.
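For anyone hitting the same wall: `mdadm --zero-superblock` and `wipefs -a` are the purpose-built tools for clearing stale RAID metadata, and would likely make the raw `dd` step unnecessary. Below is a hedged sketch that only *constructs* the command lines rather than running them, since they are destructive; the device names are illustrative, not taken from this report:

```python
def raid_wipe_commands(disk, member_partitions):
    """Build (but do not run) commands that clear stale RAID state.

    mdadm --zero-superblock erases the md metadata from each former
    member partition; wipefs -a then clears any remaining filesystem
    or RAID signatures from the whole disk. Destructive: review the
    generated commands before executing any of them.
    """
    cmds = [["mdadm", "--zero-superblock", part]
            for part in member_partitions]
    cmds.append(["wipefs", "-a", disk])
    return cmds

# Print the commands for a hypothetical two-partition former member:
for cmd in raid_wipe_commands("/dev/sdb", ["/dev/sdb1", "/dev/sdb2"]):
    print(" ".join(cmd))
```

The arrays being wiped must be stopped first (`mdadm --stop /dev/mdX`); otherwise the member devices are still held busy by the kernel.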
Attempt to install Fedora 19 RC2 to an X4150 Sun server.

cmdline:        /usr/bin/python /sbin/anaconda
cmdline_file:   initrd=initrd.img inst.stage2=hd:LABEL=Fedora\x2019\x20x86_64 xdriver=vesa nomodeset quiet BOOT_IMAGE=vmlinuz
hashmarkername: anaconda
kernel:         3.9.5-301.fc19.x86_64
package:        anaconda-19.30.12-1
product:        Fedora
reason:         MDRaidError: name_from_md_node(md0) failed
release:        Cannot get release name.
version:        19
I've just run into this when trying to install F19 from the Install DVD on top of an already-existing RAID configuration:

md0: /dev/sda1, /dev/sdb1 (used for /boot)
md1: /dev/sda3, /dev/sdb3 (used for everything else sans swap)

I've been trying to convince the automated bug reporter to upload its logs somewhere, but it's been fighting with me.
Created attachment 778495 [details]
anaconda-tb-f19_release

Here's the Anaconda traceback from my instance of this.
Created attachment 778497 [details]
syslog-f19_release

And here's syslog, which looks to be extremely informative. Namely, at 28s in, the kernel correctly scans and finds the two RAID arrays:

19:38:26,307 INFO kernel:[   28.708307] md/raid1:md1: active with 2 out of 2 mirrors
19:38:26,307 INFO kernel:[   28.708328] md1: detected capacity change from 0 to 997397692416
19:38:26,307 INFO kernel:[   28.744770] md/raid1:md0: active with 2 out of 2 mirrors
19:38:26,307 INFO kernel:[   28.744788] md0: detected capacity change from 0 to 524222464

Then after a while, some process runs into an SELinux violation, which ends up disabling both of the arrays:

19:38:49,885 NOTICE kernel:[   52.814935] type=1400 audit(1374781129.883:21): avc: denied { read write } for pid=914 comm="mdadm" path="/dev/mapper/control" dev="devtmpfs" ino=9050 scontext=system_u:system_r:mdadm_t:s0 tcontext=system_u:object_r:lvm_control_t:s0 tclass=chr_file
19:38:49,966 NOTICE kernel:[   52.895897] type=1400 audit(1374781129.964:22): avc: denied { read write } for pid=918 comm="mdadm" path="/dev/mapper/control" dev="devtmpfs" ino=9050 scontext=system_u:system_r:mdadm_t:s0 tcontext=system_u:object_r:lvm_control_t:s0 tclass=chr_file
19:38:53,878 INFO kernel:[   56.807874] md0: detected capacity change from 524222464 to 0
19:38:53,878 INFO kernel:[   56.807882] md: md0 stopped.
19:38:54,396 INFO kernel:[   57.325675] md1: detected capacity change from 997397692416 to 0
19:38:54,396 INFO kernel:[   57.325684] md: md1 stopped.

It ends up being followed by a bunch of:

19:38:58,962 ERR kernel:[   61.891739] device-mapper: table: 253:1: linear: dm-linear: Device lookup failed
19:38:58,962 WARNING kernel:[   61.891745] device-mapper: ioctl: error adding target to table

Interestingly, one of the arrays DOES seem to have come back online - I can "cat /proc/mdstat" from the command line and see that "md1" is currently active, but md0 isn't.
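For anyone wanting to check which arrays survived without eyeballing `/proc/mdstat`, a small parser of its status lines can do it. A minimal sketch; the sample text is illustrative, modeled on the md1-active/md0-stopped state described in this comment, not copied from the attached logs:

```python
def active_md_arrays(mdstat_text):
    """Return the names of md arrays listed as active in /proc/mdstat."""
    active = []
    for line in mdstat_text.splitlines():
        # Array status lines look like: "md1 : active raid1 sda3[0] sdb3[1]"
        parts = line.split()
        if (len(parts) >= 3 and parts[0].startswith("md")
                and parts[1] == ":" and parts[2] == "active"):
            active.append(parts[0])
    return active

# Illustrative mdstat content (md1 back online, md0 gone):
sample = """\
Personalities : [raid1]
md1 : active raid1 sda3[0] sdb3[1]
      974021184 blocks [2/2] [UU]

unused devices: <none>
"""
print(active_md_arrays(sample))  # -> ['md1']
```

On a live system you would feed it `open("/proc/mdstat").read()` instead of the sample string.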
At any rate, unless it's a red herring, it looks like SELinux is getting in the way of the installer being able to Do Its Thing.
*** This bug has been marked as a duplicate of bug 981991 ***