Bug 179819 - anaconda aborts; walks back to raid routines
Status: CLOSED RAWHIDE
Product: Fedora
Classification: Fedora
Component: anaconda
Version: 5
Hardware: i386 Linux
Priority: medium  Severity: medium
Assigned To: Peter Jones
QA Contact: Mike McLean
Depends On:
Blocks: FC5Blocker
 
Reported: 2006-02-03 00:50 EST by Bruce Orchard
Modified: 2007-11-30 17:11 EST

Last Closed: 2006-02-27 17:33:02 EST

Attachments: None
Description Bruce Orchard 2006-02-03 00:50:27 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.1) Gecko/20060111 Firefox/1.5.0.1

Description of problem:
Environment:  Dell XPS 400
Intel 945 chipset RAID configured for mirroring (RAID 1), 2 drives
(Dell shipped the drives in a striping (RAID 0) configuration; I changed it to mirroring.  I have not written anything to the disks since the change.)

The error is an unhandled exception.

I think it was getting ready to ask how to partition the disk drives.

End of traceback:
...
partedUtils.py, line 887
partedUtils.py, line 562
dmraid.py, line 134
dmraid.py, line 123
__init__.py, line 558
__init__.py, line 507
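
Since the RAID level was changed in the BIOS after the drives shipped, the on-disk ISW metadata is what anaconda's dmraid scan has to interpret.  As a hedged illustration only (not part of the original report, and assuming nothing beyond the dmraid command-line tool being reachable from a shell in the install environment), a minimal Python 2 sketch that dumps what dmraid reports for that metadata might look like:

# Hedged sketch, not anaconda code: show what dmraid reports for the on-disk
# ISW (Intel Software RAID) metadata before the installer tries to activate it.
# Assumes only that the dmraid binary is on the PATH of this environment.
import subprocess

def run_dmraid(*args):
    p = subprocess.Popen(("dmraid",) + args,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    return p.returncode, out, err

# "dmraid -r" lists block devices carrying RAID metadata and its format;
# "dmraid -s" summarizes the raid sets dmraid would build from that metadata
# (level, status, member devices).
for flags in (("-r",), ("-s",)):
    rc, out, err = run_dmraid(*flags)
    print "dmraid %s (rc=%d):" % (" ".join(flags), rc)
    print out or err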




Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Boot from install CD
2. Answer language, etc. questions
3.
  

Actual Results:  Unhandled exception

Expected Results:  The partition screen should have appeared


Additional info:
Comment 1 Jeremy Katz 2006-02-03 11:13:35 EST
Can you please provide the full traceback?
Comment 2 Bruce Orchard 2006-02-03 23:29:22 EST
Traceback:

gui.py  line 945
dispatch.py  line 144
dispatch.py  line 215
upgrade.py  line 50
upgrade.py  line 74
partedUtils.py  line 887
partedUtils.py  line 562
dmraid.py  line 134
dmraid.py  line 123
__init__.py  line 558
__init__.py  line 507

Comment 3 Jeremy Katz 2006-02-06 12:05:33 EST
This is still not the full traceback -- please actually save the traceback
you're given and attach it here.  There is valuable context information that is
left out by just giving line numbers.
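
For comparison, here is a small standalone example (not anaconda code, names are illustrative) of the context a full Python traceback carries beyond bare line numbers: for every frame it names the file, the function, and the offending source line, followed by the exception type and message.

# Standalone illustration of why the full traceback is requested: the
# traceback module prints file, line, function and source for each frame.
import traceback

def activate():
    raise RuntimeError("no mapping possible")

def start_all():
    activate()

try:
    start_all()
except RuntimeError:
    traceback.print_exc()  # prints the File/line/function/source frames plus the error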
Comment 4 Bruce Orchard 2006-02-07 10:52:44 EST
I haven't found a way to save the debugging output, including the traceback.  As
far as I can tell, the network has not been set up at the time of the abort, so
the scp option does not work.


Anyway, here is some more of the debugging output:

Traceback:
...
File "/usr/lib/anaconda/partedUtils.py", line 887, in openDevices
self.startDmRaid()
File "/usr/lib/anaconda/partedUtils.py", line 562, in startDmRaid
dmList=dmraid.startAllRaid(driveList)
File "/usr/lib/anaconda/dmraid.py", line 134, in startAllRaid
startRaidDev(rs)
File "/usr/lib/anaconda/dmraid.py", line 123, in startRaidDev
rs.activate(mknod=True)
File "/usr/lib/python2.4/site-packages/block/__init__.py", line 558, in activate
map=self.map
File "/usr/lib/python2.4/site-packages/block/__init__.py", line 507, in get_map
print self.rs.dmTable
RuntimeError:  no mapping possible

local variables in innermost frame:
self: <block.RaidSet instance at 0xb7cbe94c>


Dispatcher instance, containing members:
intf:  InstallInterface instance, containing members:
...
/tmp/syslog:
...
/tmp/anaconda.log
...
starting dmraids
self.driveList(): ['sda','sdb']
DiskSet.skippedDisks: []
DiskSet.skippedDisks: []
starting all dmraids on drives ['sda', 'sdb']
scanning for dmraid on drives ['sda', 'sdb']
got raidset <block.RaidSet instance at 0xb7cbe94c> (sda sdb)
valid:  True found_devs: 0 total_devs: 0
adding mapper/isw_dgjddddaee to isys cache
adding sda to dmraid cache
removing sda from isys cache
adding sdb to dmraid cache
removing sdb from isys cache
starting raid <block.RaidSet instance at 0xb7cbe94c> with mknod=True
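
Reading the traceback and the log together: block.RaidSet.activate(mknod=True) asks pyblock for the set's device-mapper table, and with the set reporting found_devs: 0 / total_devs: 0 that lookup raises RuntimeError: no mapping possible, which nothing in dmraid.startAllRaid() catches.  A hedged sketch of how the activation loop could tolerate such a set follows; it is not the actual fix that went into rawhide, the function name and logging are illustrative, and only rs.activate(mknod=True) and the RuntimeError are taken from the traceback above.

# Hedged sketch, not the actual rawhide fix: skip a raid set whose
# device-mapper table cannot be built instead of aborting the installer.
import logging
log = logging.getLogger("anaconda")

def start_raid_sets(raid_sets):
    started = []
    for rs in raid_sets:
        try:
            # The call that fails above with "RuntimeError: no mapping
            # possible" when pyblock cannot map the set (total_devs: 0).
            rs.activate(mknod=True)
            started.append(rs)
        except RuntimeError, e:  # Python 2.4 except syntax
            log.warning("skipping raid set %s: %s" % (rs, e))
    return started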
Comment 5 Jeremy Katz 2006-02-27 17:33:02 EST
This should be fixed in tomorrow's rawhide
