Bug 70786

Summary: Installer hangs when doing an upgrade
Product: [Retired] Red Hat Linux
Reporter: Walter Stadler <walter.stadler>
Component: anaconda
Assignee: Jeremy Katz <katzj>
Status: CLOSED DUPLICATE
QA Contact: Brock Organ <borgan>
Severity: medium
Priority: medium
Version: 7.2
Hardware: i686   
OS: Linux   
Last Closed: 2002-08-13 15:11:46 UTC

Description Walter Stadler 2002-08-05 09:31:25 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:0.9.2.1) Gecko/20010901

Description of problem:
I am using software RAID, and the installer seems to have a problem with it.

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Boot from RH CD
2. Choose "upgrade"
3. The installer hangs and shows the traceback under "Additional info" below

Additional info:
Traceback (innermost last):
  File "/usr/bin/anaconda", line 633, in ?
    intf.run(id, dispatch, configFileData)
  File "/usr/lib/anaconda/gui.py", line 353, in run
    self.icw.run (self.runres, configFileData)
  File "/usr/lib/anaconda/gui.py", line 814, in run
    mainloop ()
  File "/usr/lib/python1.5/site-packages/gtk.py", line 2676, in mainloop
    _gtk.gtk_main()
  File "/usr/lib/python1.5/site-packages/gtk.py", line 130, in __call__
    ret = apply(self.func, a)
  File "/usr/lib/anaconda/gui.py", line 417, in nextClicked
    self.dispatch.gotoNext()
  File "/usr/lib/anaconda/dispatch.py", line 144, in gotoNext
    self.moveStep()
  File "/usr/lib/anaconda/dispatch.py", line 209, in moveStep
    rc = apply(func, self.bindArgs(args))
  File "/usr/lib/anaconda/upgrade.py", line 34, in findRootParts
    parts = findExistingRoots(intf, id, chroot)
  File "/usr/lib/anaconda/upgrade.py", line 46, in findExistingRoots
    rootparts = diskset.findExistingRootPartitions(intf)
  File "/usr/lib/anaconda/partitioning.py", line 1293, in findExistingRootPartitions
    self.startAllRaid()
  File "/usr/lib/anaconda/partitioning.py", line 1255, in startAllRaid
    DiskSet.mdList.extend(raid.startAllRaid(driveList))
  File "/usr/lib/anaconda/raid.py", line 91, in startAllRaid
    mdList = scanForRaid(driveList)
  File "/usr/lib/anaconda/raid.py", line 43, in scanForRaid
    (major, minor, raidSet, level, nrDisks, totalDisks, mdMinor) =\
TypeError: unpack non-sequence

Local variables in innermost frame:
knownDisks: 3
minor: 90
disk: <PedDisk object at 851a8c8>
raidDevices: {5: 781309977, 4: 2088558795, 3: -219913429, 2: 227780540, 1:
-227378729, 0: -425393386}
knownLevel: 5
raidSets: {781309977: (1, 2, 5, ['hda1', 'hde1']), 227780540: (5, 3, 2, ['hda8',
'hde8']), -219913429: (5, 3, 3, ['hda6', 'hde6']), 2088558795: (5, 3, 4,
['hda2', 'hde2']), -227378729: (5, 3, 1, ['hda3', 'hde3']), -425393386: (5, 3,
0, ['hda5', 'hde5'])}
raidSet: 227780540
part: <PedPartition object at 8369ad8>
knownDevices: ['hda8', 'hde8']
nrDisks: 3
mdMinor: 2
drives: ['hda', 'hde', 'hdg']
raidParts: [<PedPartition object at 8369a30>, <PedPartition object at 8369a18>,
<PedPartition object at 8369a78>, <PedPartition object at 8369a90>,
<PedPartition object at 8369ac0>, <PedPartition object at 8369ad8>]
parts: ['hdg1', 'hdg2', 'hdg3', 'hdg5', 'hdg6', 'hdg8']
dev: hdg1
level: 5
totalDisks: 3
d: hdg
knownMinor: 2
major: 0
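
For context: "TypeError: unpack non-sequence" is what Python 1.5 raises when the right-hand side of a tuple assignment is not a sequence, typically because a helper returned None. A minimal sketch of the pattern, in the era's Python style; read_raid_superblock is a hypothetical stand-in for whatever raid.py line 43 actually calls, which presumably returns None for a partition without a usable RAID superblock:

# Hypothetical stand-in for the superblock reader used by
# raid.scanForRaid(); assumed to return None when the superblock
# is stale or missing (illustrated here for hdg1).
def read_raid_superblock(dev):
    if dev == "hdg1":
        return None
    return (0, 90, 227780540, 5, 3, 3, 2)

dev = "hdg1"
sb = read_raid_superblock(dev)
if sb is None:
    # Without this guard, the tuple assignment below raises
    # "TypeError: unpack non-sequence", exactly as in the traceback.
    print "skipping %s: no usable RAID superblock" % dev
else:
    (major, minor, raidSet, level,
     nrDisks, totalDisks, mdMinor) = sb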

Comment 1 Michael Fulbright 2002-08-05 15:29:46 UTC
Do you have any incomplete RAID devices on your drives? That is, any partitions
that are marked for use by Linux RAID but are not part of a current RAID device?
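
One way to check this from the running system (a sketch under assumptions, in the era's Python style; not anything the installer itself does): list the partitions whose partition type is fd ("Linux raid autodetect") via sfdisk -d and compare them against the members /proc/mdstat reports as active.

# Sketch: find partitions flagged type fd ("Linux raid autodetect")
# that no active md device claims. Assumes sfdisk(8) and the
# 2.4-kernel /proc/mdstat format, as on Red Hat Linux 7.2.
import os, re

def raid_flagged(disk):
    # sfdisk -d prints lines like "/dev/hda1 : start= 63, ..., Id=fd"
    flagged = []
    for line in os.popen("sfdisk -d /dev/%s" % disk).readlines():
        m = re.match(r"/dev/(\S+)\s*:.*Id=\s*fd\b", line)
        if m:
            flagged.append(m.group(1))
    return flagged

def active_members():
    # /proc/mdstat lines like "md2 : active raid5 hda8[0] hde8[1] hdg8[2]"
    members = []
    for line in open("/proc/mdstat").readlines():
        if re.match(r"md\d+ :", line):
            members = members + re.findall(r"(\w+)\[\d+\]", line)
    return members

used = active_members()
for disk in ("hda", "hde", "hdg"):
    for part in raid_flagged(disk):
        if part not in used:
            print "%s: flagged for RAID but in no active array" % part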

Comment 2 Walter Stadler 2002-08-05 19:26:08 UTC
I think all raid devices are active:
/etc/raidtab:
raiddev             /dev/md4
raid-level                  5
nr-raid-disks               3
chunk-size                  64k
persistent-superblock       1
nr-spare-disks              0
    device          /dev/hda2
    raid-disk     0
    device          /dev/hde2
    raid-disk     1
    device          /dev/hdg6
    raid-disk     2
raiddev             /dev/md5
raid-level                  1
nr-raid-disks               2
chunk-size                  64k
persistent-superblock       1
nr-spare-disks              0
    device          /dev/hda1
    raid-disk     0
    device          /dev/hde1
    raid-disk     1
raiddev             /dev/md1
raid-level                  5
nr-raid-disks               3
chunk-size                  64k
persistent-superblock       1
nr-spare-disks              0
    device          /dev/hda3
    raid-disk     0
    device          /dev/hde3
    raid-disk     1
    device          /dev/hdg3
    raid-disk     2
raiddev             /dev/md0
raid-level                  5
nr-raid-disks               3
chunk-size                  64k
persistent-superblock       1
nr-spare-disks              0
    device          /dev/hda5
    raid-disk     0
    device          /dev/hde5
    raid-disk     1
    device          /dev/hdg2
    raid-disk     2
raiddev             /dev/md2
raid-level                  5
nr-raid-disks               3
chunk-size                  64k
persistent-superblock       1
nr-spare-disks              0
    device          /dev/hda8
    raid-disk     0
    device          /dev/hde8
    raid-disk     1
    device          /dev/hdg8
    raid-disk     2
raiddev             /dev/md3
raid-level                  5
nr-raid-disks               3
chunk-size                  64k
persistent-superblock       1
nr-spare-disks              0
    device          /dev/hda6
    raid-disk     0
    device          /dev/hde6
    raid-disk     1
    device          /dev/hdg5
    raid-disk     2
-----------------------------------------------
cat /proc/mdstat:
Personalities : [raid1] [raid5] 
read_ahead 1024 sectors
md2 : active raid5 hda8[0] hde8[1] hdg8[2]
      26644224 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
      
md4 : active raid5 hda2[0] hde2[1] hdg6[2]
      4095232 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
      
md3 : active raid5 hda6[0] hde6[1] hdg5[2]
      6136576 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
      
md1 : active raid5 hda3[0] hde3[1] hdg3[2]
      61432320 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
      
md0 : active raid5 hda5[0] hde5[1] hdg2[2]
      61432320 blocks level 5, 64k chunk, algorithm 0 [3/3] [UUU]
      
md5 : active raid1 hda1[0] hde1[1]
      30592 blocks [2/2] [UU]
      
unused devices: <none>
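
Every array above reports itself complete ([3/3] [UUU] for the RAID-5 sets, [2/2] [UU] for md5). Note, though, that the local variables in the traceback show the scan failing on hdg1, a partition that appears in none of the raidtab entries; that may be exactly the kind of leftover Comment 1 asks about. For completeness, a small sketch (same assumed Python style and 2.4-kernel mdstat format as above) that flags degraded arrays:

# Sketch: flag any md array whose status line shows missing members,
# e.g. "[3/2] [UU_]" instead of "[3/3] [UUU]".
import re

array = None
for line in open("/proc/mdstat").readlines():
    m = re.match(r"(md\d+) :", line)
    if m:
        array = m.group(1)
        continue
    m = re.search(r"\[(\d+)/(\d+)\]", line)
    if m and array:
        want, have = int(m.group(1)), int(m.group(2))
        if want != have:
            print "%s is degraded (%d of %d members active)" % \
                  (array, have, want)
        array = None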


Comment 3 Jeremy Katz 2002-08-13 15:48:18 UTC
Can you try using the update disk available at
http://people.redhat.com/~katzj/73raid.img? If you write that image to a
separate floppy and boot with 'linux updates', inserting the disk when
prompted, it should fix the problem.

*** This bug has been marked as a duplicate of 64734 ***