Bug 965299

Summary: OSError: [Errno 2] No such file or directory: '/dev/md'
Product: Fedora
Component: anaconda
Version: 19
Hardware: x86_64
OS: Unspecified
Status: CLOSED ERRATA
Severity: unspecified
Priority: unspecified
Reporter: Gabriel Ramirez <gabriello.ramirez>
Assignee: David Lehman <dlehman>
QA Contact: Fedora Extras Quality Assurance <extras-qa>
CC: anaconda-maint-list, chris, dshea, gabriello.ramirez, g.kaviyarasu, jonathan, mkolman, sbueno, vanmeeuwen+fedora
Whiteboard: abrt_hash:3c9afb24b2a507bce715c2f51a2c4d611113b674a9d3a0d676af694873513146
Fixed In Version: pykickstart-1.99.32-1.fc19
Doc Type: Bug Fix
Last Closed: 2013-06-18 06:16:29 UTC
Attachments:
  anaconda-tb
  anaconda.log
  backtrace
  environ
  ifcfg.log
  ks.cfg
  lsblk_output
  nmcli_dev_list
  packaging.log
  program.log
  storage.log
  syslog
  anaconda exception report (comment 15)

Description Gabriel Ramirez 2013-05-20 20:41:42 UTC
Description of problem:
I tried to install Fedora 19 Beta TC2 onto two existing RAID1 arrays (Linux kernel software RAID) via kickstart.

After anaconda started I switched to a terminal and typed:

cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]  [raid10] [linear]
unused devices : <none>

mdadm --assemble /dev/md0 /dev/vda2 /dev/vdb2
mdadm: /dev/md0 has been  started with two drives.

mdadm --assemble /dev/md1 /dev/vda3 /dev/vdb3
mdadm: /dev/md1 has been  started with two drives.

cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]  [raid10] [linear]
md1 : active raid1 vda3[0] vdb3[1]
          12067751 blocks super 1.2 [2/2] [UU]

md0 : active raid1 vda2[0] vdb2[1]
          511936 blocks super 1.2 [2/2] [UU]

unused devices : <none>

ls /dev/md*
/dev/md0 /dev/md1 /dev/md1p1 /dev/md1p2

After that I switched back to anaconda via Alt-F7 and tried to refresh the disks to show the RAID partitions, and anaconda showed the error.

thanks, 

Gabriel
The following was filed automatically by anaconda:
anaconda 19.28-1 exception report
Traceback (most recent call first):
  File "/usr/lib/python2.7/site-packages/blivet/devicelibs/mdraid.py", line 269, in name_from_md_node
    for link in os.listdir(md_dir):
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 768, in addUdevPartitionDevice
    name = devicelibs.mdraid.name_from_md_node(name)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1054, in addUdevDevice
    device = self.addUdevPartitionDevice(info)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1874, in _populate
    self.addUdevDevice(dev)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1818, in populate
    self._populate()
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 409, in reset
    self.devicetree.populate(cleanupOnly=cleanupOnly)
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 140, in storageInitialize
    storage.reset()
  File "/usr/lib64/python2.7/threading.py", line 766, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 168, in run
    threading.Thread.run(self, *args, **kwargs)
OSError: [Errno 2] No such file or directory: '/dev/md'

Version-Release number of selected component:
anaconda-19.28-1

Additional info:
reporter:       libreport-2.1.4
cmdline:        /usr/bin/python  /sbin/anaconda
executable:     /sbin/anaconda
hashmarkername: anaconda
kernel:         3.9.2-301.fc19.x86_64
product:        Fedora
release:        Cannot get release name.
type:           anaconda
version:        19-Beta

Truncated backtrace:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 168, in run
    threading.Thread.run(self, *args, **kwargs)
  File "/usr/lib64/python2.7/threading.py", line 766, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 140, in storageInitialize
    storage.reset()
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 409, in reset
    self.devicetree.populate(cleanupOnly=cleanupOnly)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1818, in populate
    self._populate()
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1874, in _populate
    self.addUdevDevice(dev)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1054, in addUdevDevice
    device = self.addUdevPartitionDevice(info)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 768, in addUdevPartitionDevice
    name = devicelibs.mdraid.name_from_md_node(name)
  File "/usr/lib/python2.7/site-packages/blivet/devicelibs/mdraid.py", line 269, in name_from_md_node
    for link in os.listdir(md_dir):
OSError: [Errno 2] No such file or directory: '/dev/md'
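The failing call is os.listdir() on '/dev/md', the directory of named symlinks that udev normally creates for md arrays. When the arrays are assembled by hand from the shell (as in the description above), that directory may never appear, so blivet's name_from_md_node() crashes instead of falling back to the plain node name. A minimal sketch of the lookup with a defensive guard, assuming a helper of this shape (the exact upstream code may differ):

    import os

    MD_DIR = "/dev/md"   # udev-created symlinks, e.g. /dev/md/root -> ../md127

    def name_from_md_node(node):
        """Resolve a kernel node name such as 'md127' to the array name udev recorded.

        Hypothetical guard: if /dev/md does not exist (no named symlinks were
        created, e.g. after a manual 'mdadm --assemble'), fall back to the node
        name instead of letting os.listdir() raise OSError(ENOENT).
        """
        if not os.path.isdir(MD_DIR):
            return node
        for link in os.listdir(MD_DIR):
            target = os.path.realpath(os.path.join(MD_DIR, link))
            if os.path.basename(target) == node:
                return link   # the symlink's basename is the array's name
        return node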

Comment 1 Gabriel Ramirez 2013-05-20 20:42:09 UTC
Created attachment 750710 [details]
File: anaconda-tb

Comment 2 Gabriel Ramirez 2013-05-20 20:42:13 UTC
Created attachment 750711 [details]
File: anaconda.log

Comment 3 Gabriel Ramirez 2013-05-20 20:42:16 UTC
Created attachment 750712 [details]
File: backtrace

Comment 4 Gabriel Ramirez 2013-05-20 20:42:20 UTC
Created attachment 750713 [details]
File: environ

Comment 5 Gabriel Ramirez 2013-05-20 20:42:23 UTC
Created attachment 750714 [details]
File: ifcfg.log

Comment 6 Gabriel Ramirez 2013-05-20 20:42:26 UTC
Created attachment 750715 [details]
File: ks.cfg

Comment 7 Gabriel Ramirez 2013-05-20 20:42:30 UTC
Created attachment 750716 [details]
File: lsblk_output

Comment 8 Gabriel Ramirez 2013-05-20 20:42:33 UTC
Created attachment 750717 [details]
File: nmcli_dev_list

Comment 9 Gabriel Ramirez 2013-05-20 20:42:38 UTC
Created attachment 750718 [details]
File: packaging.log

Comment 10 Gabriel Ramirez 2013-05-20 20:42:42 UTC
Created attachment 750719 [details]
File: program.log

Comment 11 Gabriel Ramirez 2013-05-20 20:42:53 UTC
Created attachment 750720 [details]
File: storage.log

Comment 12 Gabriel Ramirez 2013-05-20 20:42:59 UTC
Created attachment 750721 [details]
File: syslog

Comment 13 David Lehman 2013-05-20 20:56:08 UTC
Please try it again, but skip the part where you go to the shell and manually activate your md arrays. Thanks.

Comment 14 Gabriel Ramirez 2013-05-21 03:43:21 UTC
(In reply to David Lehman from comment #13)
> Please try it again, but skip the part where you go to the shell and
> manually activate your md arrays. Thanks.

If the kickstart has the following:

raid /boot --level=1 --noformat --device=md1 /dev/vda3 /dev/vdb3
raid / --level=1 --noformat --device=md2 /dev/vda4 /dev/vdb4
raid /var/lib/lh/syslinux --level=1 --noformat --device=md0 /dev/vda2 /dev/vdb2
raid swap --level=1 --noformat --device=md3 /dev/vda5 /dev/vdb5

the installer shows in text mode:

Members may not be specified for prexisting device

Pane is dead

cat /proc/mdstat

md124 : active (auto-read-only) raid1 vda5[0] vdb5[1]
      3143616 blocks super 1.2 [2/2] [UU]

md125 : active (auto-read-only) raid1 vda3[0] vdb3[1]
      522176 blocks [2/2] [UU]

md126 : active (auto-read-only) raid1 vda4[0] vdb4[1]
      20955008 blocks super 1.2 [2/2] [UU]

md127 : active (auto-read-only) raid1 vda2[0] vdb2[1]
      51136 blocks [2/2] [UU]

unused devices: <none>

Comment 15 Gabriel Ramirez 2013-05-21 03:48:02 UTC
(In reply to Gabriel Ramirez from comment #14)
By the way, I'm using Fedora 19 Beta RC2; I mistyped TC2 in the first comment.

> (In reply to David Lehman from comment #13)

If the kickstart has the following (the md devices from the previous run):

raid /boot --level=1 --noformat --device=md125
raid / --level=1 --noformat --device=md126
raid /var/lib/lh/syslinux --level=1 --noformat --device=md127
raid swap --level=1 --noformat --device=md124

the installer starts in graphical mode but shows in the left corner:

No disks selected 

In a virtual terminal, cat /proc/mdstat shows:
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] [linear]
unused devices: <none>

After a while the installer shows "An unknown error occurred", and the log shows:

anaconda 19.28-1 exception report
Traceback (most recent call first):
  File "/usr/lib64/python2.7/site-packages/pyanaconda/kickstart.py", line 1097, 
in execute
    raise KickstartValueError, formatErrorMsg(self.lineno, msg="No preexisting R
AID device with the name \"%s\" was found." % devicename)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/kickstart.py", line 1043, in execute
    r.execute(storage, ksdata, instClass)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/kickstart.py", line 1644, in doKickstartStorage
    ksdata.raid.execute(storage, ksdata, instClass)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/ui/gui/spokes/storage.py", line 390, in _doExecute
    doKickstartStorage(self.storage, self.data, self.instclass)
  File "/usr/lib64/python2.7/threading.py", line 766, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 168, in run
    threading.Thread.run(self, *args, **kwargs)
KickstartValueError: The following problem occurred on line 32 of the kickstart file:

No preexisting RAID device with the name "md125" was found.

Local variables in innermost frame:
devicename: md125
instClass: <pyanaconda.installclass.DefaultInstall object at 0x7f2e035c4b90>
devicetree: <blivet.devicetree.DeviceTree object at 0x7f2e035c4350>
self: raid /boot --device=125 --level=RAID1 --noformat --useexisting

storage: <blivet.Blivet object at 0x7f2e0c34a750>
dev: None
ksdata: #version=DEVEL

snip

# System bootloader configuration
bootloader --location=mbr --boot-drive=vda
raid /boot --device=125 --level=RAID1 --noformat --useexisting
raid / --device=126 --level=RAID1 --noformat --useexisting
raid /var/lib/lh/syslinux --device=127 --level=RAID1 --noformat --useexisting
raid swap --device=124 --level=RAID1 --noformat --useexisting

I will attach the complete anaconda error file.

thanks,

Gabriel
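
For context on the KickstartValueError: with --useexisting, the numeric --device value is turned into a name such as "md125" and looked up in blivet's device tree, which is empty here because no arrays are active (see the /proc/mdstat output above). A rough illustration of that lookup, based on the traceback and the local variables; the names and structure are illustrative, not the exact upstream code:

    from pykickstart.errors import KickstartValueError, formatErrorMsg

    def lookup_preexisting_raid(raid_data, devicetree):
        # --device=125 together with --useexisting is resolved to the name "md125"
        devicename = "md%s" % raid_data.device

        dev = devicetree.getDeviceByName(devicename)
        if dev is None:
            # the branch hit in this comment: the arrays were never activated,
            # so the device tree contains no device named "md125"
            raise KickstartValueError(formatErrorMsg(
                raid_data.lineno,
                msg='No preexisting RAID device with the name "%s" was found.'
                    % devicename))
        return dev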

Comment 16 Gabriel Ramirez 2013-05-21 03:53:38 UTC
Created attachment 750838 [details]
anaconda exception report comment 15

Comment 17 Fedora Update System 2013-06-14 16:45:19 UTC
pykickstart-1.99.32-1.fc19,python-blivet-0.16-1.fc19,anaconda-19.30.6-1.fc19 has been submitted as an update for Fedora 19.
https://admin.fedoraproject.org/updates/pykickstart-1.99.32-1.fc19,python-blivet-0.16-1.fc19,anaconda-19.30.6-1.fc19

Comment 18 Fedora Update System 2013-06-15 17:06:59 UTC
Package anaconda-19.30.7-1.fc19, pykickstart-1.99.32-1.fc19, python-blivet-0.16-1.fc19:
* should fix your issue,
* was pushed to the Fedora 19 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing anaconda-19.30.7-1.fc19 pykickstart-1.99.32-1.fc19 python-blivet-0.16-1.fc19'
as soon as you are able to.
Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2013-10913/pykickstart-1.99.32-1.fc19,python-blivet-0.16-1.fc19,anaconda-19.30.7-1.fc19
then log in and leave karma (feedback).

Comment 19 Gabriel Ramirez 2013-06-18 03:18:00 UTC
Thanks, this is solved in the current Fedora 19 TC3.

Comment 20 Fedora Update System 2013-06-18 06:16:29 UTC
pykickstart-1.99.32-1.fc19, python-blivet-0.16-1.fc19, anaconda-19.30.8-1.fc19 has been pushed to the Fedora 19 stable repository.  If problems still persist, please make note of it in this bug report.

Comment 21 Chris Hubick 2013-07-14 23:05:27 UTC
I don't know if the cause is the same as this bug's, but my Fedora 19 release installer just crashed twice in a row with the same "anaconda 19.28-1 exception report" at the same line 269 in mdraid.py in name_from_md_node :(

Comment 22 Chris Hubick 2013-11-13 22:50:58 UTC
For the record, my problem was that described in https://bugzilla.redhat.com/show_bug.cgi?id=975811#c12