Bug 179819

Summary: anaconda aborts; traceback leads back to the RAID routines
Product: Fedora
Reporter: Bruce Orchard <orchard>
Component: anaconda
Assignee: Peter Jones <pjones>
Status: CLOSED RAWHIDE
QA Contact: Mike McLean <mikem>
Severity: medium
Docs Contact:
Priority: medium    
Version: 5   
Target Milestone: ---   
Target Release: ---   
Hardware: i386   
OS: Linux   
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2006-02-27 22:33:02 UTC
Type: ---
Bug Depends On:    
Bug Blocks: 150222    

Description Bruce Orchard 2006-02-03 05:50:27 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.1) Gecko/20060111 Firefox/1.5.0.1

Description of problem:
Environment:  Dell XPS 400
Intel 945 chipset RAID configured for mirroring (RAID 1), 2 drives
(Dell shipped the drives in a striping (RAID 0) configuration; I changed it to mirroring.  I have not written anything to the disks since the change.)

The error is an unhandled exception.

I think it was getting ready to ask how to partition the disk drives.

End of traceback:
...
partedUtils.py, line 887
partedUtils.py, line 562
dmraid.py, line 134
dmraid.py, line 123
__init__.py, line 558
__init__.py, line 507




Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Boot from install CD
2. Answer language, etc. questions
3.
  

Actual Results:  Unhandled exception

Expected Results:  The partition screen should have appeared


Additional info:

Comment 1 Jeremy Katz 2006-02-03 16:13:35 UTC
Can you please provide the full traceback?

Comment 2 Bruce Orchard 2006-02-04 04:29:22 UTC
Traceback:

gui.py  line 945
dispatch.py  line 144
dispatch.py  line 215
upgrade.py  line 50
upgrade.py  line 74
partedUtils.py  line 887
partedUtils.py  line 562
dmraid.py  line 134
dmraid.py  line 123
__init__.py  line 558
__init__.py  line 507



Comment 3 Jeremy Katz 2006-02-06 17:05:33 UTC
This is still not the full traceback -- please actually save the traceback
you're given and attach it here.  There is valuable context information that is
left out by just giving line numbers.

Comment 4 Bruce Orchard 2006-02-07 15:52:44 UTC
I haven't found a way to save the debugging output, including the traceback.  As
far as I can tell, the network has not been set up at the time of the abort, so
the scp option does not work.


Anyway, here is some more of the debugging output:

Traceback:
...
File "/usr/lib/anaconda/partedUtils.py", line 887, in openDevices
self.startDmRaid()
File "/usr/lib/anaconda/partedUtils.py", line 562, in startDmRaid
dmList=dmraid.startAllRaid(driveList)
File "/usr/lib/anaconda/dmraid.py", line 134, in startAllRaid
startRaidDev(rs)
File "/usr/lib/anaconda/dmraid.py", line 123, in startRaidDev
rs.activate(mknod=True)
File "/usr/lib/python2.4/site-packages/block/__init__.py", line 558, in activate
map=self.map
File "/usr/lib/python2.4/site-packages/block/__init__.py", line 507, in get_map
print self.rs.dmTable
RuntimeError:  no mapping possible

local variables in innermost frame:
self: <block.RaidSet instance at 0xb7cbe94c>


Dispatcher instance, containing members:
intf:  InstallInterface instance, containing members:
...
/tmp/syslog:
...
/tmp/anaconda.log:
...
starting dmraids
self.driveList(): ['sda','sdb']
DiskSet.skippedDisks: []
DiskSet.skippedDisks: []
starting all dmraids on drives ['sda', 'sdb']
scanning for dmraid on drives ['sda', 'sdb']
got raidset <block.RaidSet instance at 0xb7cbe94c> (sda sdb)
valid:  True found_devs: 0 total_devs: 0
adding mapper/isw_dgjddddaee to isys cache
adding sda to dmraid cache
removing sda from isys cache
adding sdb to dmraid cache
removing sdb from isys cache
starting raid <block.RaidSet instance at 0xb7cbe94c> with mknod=True
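
The traceback fails inside rs.activate(mknod=True) at dmraid.py line 123: building the device-mapper table for the set raises "RuntimeError: no mapping possible".  Below is a minimal sketch (not anaconda's actual fix) of how that call site could be guarded so a set that cannot be mapped is skipped instead of aborting the installer; the skip-and-report behaviour and the print are assumptions for illustration only.

# Hypothetical guard around the failing call shown at dmraid.py line 123.
# Only startRaidDev() and rs.activate(mknod=True) come from the traceback;
# the skip-on-error policy is an illustrative assumption.
def startRaidDev(rs):
    try:
        rs.activate(mknod=True)    # raises RuntimeError: "no mapping possible"
    except RuntimeError, e:        # old-style except clause, matching the installer's Python 2.4
        print "skipping dmraid set %s: %s" % (rs, e)
        return None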

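The anaconda.log excerpt also shows the set being detected with found_devs: 0 and total_devs: 0, which fits metadata left over from the original RAID 0 configuration.  The following sketch filters out such sets before activation, assuming those counts are reachable as attributes on the RaidSet object; the real startAllRaid takes a drive list and performs the scan itself, whereas this version takes the already-scanned sets so it stays self-contained.

# Illustrative filter only: the names found_devs/total_devs are taken from
# the log line above; whether block.RaidSet exposes them as attributes is
# an assumption, as is the decision to skip rather than raise.
def startAllRaid(raidSets):
    started = []
    for rs in raidSets:                          # block.RaidSet objects from the dmraid scan
        if getattr(rs, "total_devs", 0) == 0:
            print "dmraid set %s reports no member devices, skipping" % (rs,)
            continue
        startRaidDev(rs)                         # guarded version sketched above
        started.append(rs)
    return started
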

Comment 5 Jeremy Katz 2006-02-27 22:33:02 UTC
This should be fixed in tomorrow's rawhide