Bug 78210 - automatic partitioning failure when installing over existing software raid volumes
Summary: automatic partitioning failure when installing over existing software raid volumes
Keywords:
Status: CLOSED RAWHIDE
Alias: None
Product: Red Hat Linux
Classification: Retired
Component: installer
Version: 9
Hardware: i386
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Jeremy Katz
QA Contact: Brock Organ
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2002-11-20 02:56 UTC by G Sandine
Modified: 2007-04-18 16:48 UTC
CC List: 1 user

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2006-04-24 18:36:12 UTC
Embargoed:


Attachments
anaconda dump following /boot RAID1, sys RAID5 kickstart reinstall attempt (86.48 KB, text/plain)
2002-12-02 16:20 UTC, G Sandine

Description G Sandine 2002-11-20 02:56:01 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.0.1) Gecko/20021003

Description of problem:
Kickstart installing over an existing Red Hat 8.0 system fails with ``Automatic
Partitioning Errors'':
The mount point is invalid.  Mount points must start with `/' and cannot end
with `/', and must contain printable characters and no spaces.  Press `OK' to
reboot your system.

The relevant lines of ks.cfg look like this (there may be extra line breaks due to
the size of the HTML form window):

part raid.01 --onpart hda5
part raid.11 --onpart hde5
part raid.21 --onpart hdg5
part raid.02 --onpart hda6
part raid.12 --onpart hde6
part raid.22 --onpart hdg6
part raid.03 --onpart hda7
part raid.13 --onpart hde7
part raid.23 --onpart hdg7
part raid.04 --onpart hda8
part raid.14 --onpart hde8
part raid.24 --onpart hdg8
part raid.05 --onpart hda9
part raid.15 --onpart hde9
part raid.25 --onpart hdg9
part raid.06 --onpart hda10
part raid.16 --onpart hde10
part raid.26 --onpart hdg10
part raid.07 --onpart hda11
part raid.17 --onpart hde11
part raid.27 --onpart hdg11

raid /boot --level=1 --fstype=ext3 --spares=1 --device=md0 raid.01 raid.11 raid.21
raid / --level=0 --fstype=ext3 --device=md1 raid.02 raid.12 raid.22
raid /usr --level=0 --fstype=ext3 --device=md2 raid.03 raid.13 raid.23
raid /var --level=0 --fstype=ext3 --device=md3 raid.04 raid.14 raid.24
raid swap --level=0 --device=md4 raid.05 raid.15 raid.25
raid /opt --level=0 --fstype=ext3 --device=md5 raid.06 raid.16 raid.26
raid /home --level=0 --fstype=ext3 --device=md6 raid.07 raid.17 raid.27

The initial install works fine (using partitioned hard drives); attempting to
reinstall fails.

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Partition hard drives as described in above ks.cfg
2. Do a successful kickstart install 
3. Attempt to reinstall with same settings
	

Actual Results:  Install fails.

Expected Results:  Install should not fail with the indicated error.

Additional info:

One can repartition with changes (e.g., move hd{a,e,g}5 in by one cylinder),
and the installation will then succeed.

Comment 1 G Sandine 2002-11-29 23:40:39 UTC
Kickstart re-installation fails identically when using software RAID5 for the
non-/boot partitions (hda, hde, hdg [Promise controller]) and software RAID1 for
/boot.

Comment 2 G Sandine 2002-12-02 16:20:55 UTC
Created attachment 87046 [details]
anaconda dump following /boot RAID1, sys RAID5 kickstart reinstall attempt

Comment 3 G Sandine 2002-12-02 16:25:22 UTC
Attachment (id=87046) is the anaconda dump from a reinstall attempt using the
following portion of ks.cfg:

part raid.01 --onpart=hde1
part raid.02 --onpart=hdg1
part raid.03 --onpart=hda1
part raid.11 --onpart=hde5
part raid.12 --onpart=hdg5
part raid.13 --onpart=hda5
part raid.21 --onpart=hde6
part raid.22 --onpart=hdg6
part raid.23 --onpart=hda6
part raid.31 --onpart=hde7
part raid.32 --onpart=hdg7
part raid.33 --onpart=hda7
part raid.41 --onpart=hde8
part raid.42 --onpart=hdg8
part raid.43 --onpart=hda8
part raid.51 --onpart=hde9
part raid.52 --onpart=hdg9
part raid.53 --onpart=hda9
part raid.61 --onpart=hde10
part raid.62 --onpart=hdg10
part raid.63 --onpart=hda10

raid /boot --fstype=ext2 --level=RAID1 --spares=1 --device=md0 raid.01 raid.02 raid.03
raid / --fstype=ext3 --level=RAID5 --device=md1 raid.11 raid.12 raid.13
raid /usr --fstype=ext3 --level=RAID5 --device=md2 raid.21 raid.22 raid.23
raid /var --fstype=ext3 --level=RAID5 --device=md3 raid.31 raid.32 raid.33
raid swap --level=RAID5 --device=md4 raid.41 raid.42 raid.43
raid /opt --fstype=ext3 --level=RAID5 --noformat --device=md5 raid.51 raid.52 raid.53
raid /home --fstype=ext3 --level=RAID5 --noformat --device=md6 raid.61 raid.62 raid.63

In the dump, I don't know where this came from:

, RAID Request -- mountpoint: None  uniqueID: 25
  type: ext2  format: 0  badblocks: None
  raidlevel: RAID1  raidspares: 1
  raidmembers: [1, 9, 17]


Comment 4 Need Real Name 2003-01-12 12:02:00 UTC
This problem does not exist in the Red Hat 7.3 installer.  We recently did a
kickstart install with a software RAID5 partition scheme (except for /boot on a
small single ext2 partition) and reinstalled via kickstart (with some
"--noformat" options, for the purpose of a system restore) without incident.

Comment 5 Jeremy Katz 2003-02-12 02:26:46 UTC
This should be fixed in our current internal trees

Comment 6 G Sandine 2003-03-12 15:45:11 UTC
Is there a way to avoid this problem with a currently available Red Hat 8.0
installer / anaconda?

Comment 7 G Sandine 2003-05-13 19:58:20 UTC
Red Hat 9 fails in exactly the same way with a software RAID5 setup (/boot on a
50 MB regular partition formatted ext2; the rest of the system, i.e. /, /usr,
/var, /opt, and /home, in software RAID5).  Here is the relevant part of ks.cfg
that causes the Red Hat 9 installer to die in exactly the same way as the
Red Hat 8.0 installer did:

part raid.a6 --onpart=hda6
part raid.e6 --onpart=hde6
part raid.g6 --onpart=hdg6
part raid.a7 --onpart=hda7
part raid.e7 --onpart=hde7
part raid.g7 --onpart=hdg7
part raid.a8 --onpart=hda8
part raid.e8 --onpart=hde8
part raid.g8 --onpart=hdg8
part raid.a10 --onpart=hda10
part raid.e10 --onpart=hde10
part raid.g10 --onpart=hdg10
part raid.a11 --onpart=hda11
part raid.e11 --onpart=hde11
part raid.g11 --onpart=hdg11
 
part /boot --fstype=ext2 --onpart=hda5
part swap --onpart=hde5
part swap --onpart=hdg5
part swap --onpart=hda9
part swap --onpart=hde9
part swap --onpart=hdg9
raid / --fstype=ext3 --level=RAID5 --spares=0 --device=md0 raid.a6 raid.e6 raid.g6
raid /usr --fstype=ext3 --level=RAID5 --spares=0 --device=md1 raid.a7 raid.e7 raid.g7
raid /var --fstype=ext3 --level=RAID5 --spares=0 --device=md2 raid.a8 raid.e8 raid.g8
raid /opt --fstype=ext3 --level=RAID5 --spares=0 --device=md3 raid.a10 raid.e10 raid.g10
raid /home --fstype=ext3 --level=RAID5 --spares=0 --device=md4 raid.a11 raid.e11 raid.g11


Comment 8 G Sandine 2003-05-13 20:12:31 UTC
I was wrong; the failure is _not_ the same.  Here is the Python traceback:

Traceback (most recent call last):
  File "/usr/bin/anaconda", line 739, in ?
    intf.run(id, dispatch, configFileData)
  File "/usr/lib/anaconda/gui.py", line 670, in run
    self.icw.run (self.runres, configFileData)
  File "/usr/lib/anaconda/gui.py", line 1299, in run
    self.setup_window(runres)
  File "/usr/lib/anaconda/gui.py", line 1271, in setup_window
    self.setScreen ()
  File "/usr/lib/anaconda/gui.py", line 943, in setScreen
    (step, args) = self.dispatch.currentStep()
  File "/usr/lib/anaconda/dispatch.py", line 262, in currentStep
    self.gotoNext()
  File "/usr/lib/anaconda/dispatch.py", line 157, in gotoNext
    self.moveStep()
  File "/usr/lib/anaconda/dispatch.py", line 225, in moveStep
    rc = apply(func, self.bindArgs(args))
  File "/usr/lib/anaconda/autopart.py", line 1293, in doAutoPartition
    doPartitioning(diskset, partitions, doRefresh = 0)
  File "/usr/lib/anaconda/autopart.py", line 967, in doPartitioning
    (ret, msg) = processPartitioning(diskset, requests, newParts)
  File "/usr/lib/anaconda/autopart.py", line 938, in processPartitioning
    request.size = request.getActualSize(requests, diskset)
  File "/usr/lib/anaconda/partRequests.py", line 604, in getActualSize
    partsize = req.getActualSize(partitions, diskset)
AttributeError: 'NoneType' object has no attribute 'getActualSize'

and here is some more from the anaconda dump, directly below the traceback:

Local variables in innermost frame:
member: 3
smallest: None
nummembers: 3
sum: 0
req: None
self: RAID Request -- mountpoint: None  uniqueID: 33
  type: ext3  format: 0  badblocks: None
  raidlevel: RAID5  raidspares: 0
  raidmembers: [3, 12, 21]
diskset: <partedUtils.DiskSet instance at 0x82c146c>
partitions: <partitions.Partitions instance at 0x8738514>
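To make the failure mode concrete: req is None in the innermost frame, so the member-size loop dereferences a missing RAID member.  Below is a minimal, self-contained sketch of that pattern; all class and helper names are invented for illustration and are not anaconda's real API:

# Hypothetical sketch of the crash pattern above: a RAID request sums the
# sizes of its member requests, one member ID cannot be resolved, the lookup
# yields None, and the next method call raises AttributeError.
class PartRequestSketch:
    def __init__(self, uniqueID, size):
        self.uniqueID = uniqueID
        self.size = size

    def getActualSize(self):
        return self.size

class RaidRequestSketch:
    def __init__(self, raidmembers):
        self.raidmembers = raidmembers          # e.g. [3, 12, 21]

    def getActualSize(self, requests_by_id):
        smallest = None
        for member in self.raidmembers:
            req = requests_by_id.get(member)    # None for a stale/unknown ID
            partsize = req.getActualSize()      # AttributeError when req is None
            if smallest is None or partsize < smallest:
                smallest = partsize
        return smallest * len(self.raidmembers)

requests_by_id = {3: PartRequestSketch(3, 100), 12: PartRequestSketch(12, 100)}
try:
    RaidRequestSketch([3, 12, 21]).getActualSize(requests_by_id)  # member 21 missing
except AttributeError as e:
    print(e)    # 'NoneType' object has no attribute 'getActualSize'

The locals above (raidmembers: [3, 12, 21], req: None) suggest anaconda hit exactly this: one of the pre-existing RAID members was never turned into a partition request before the size calculation ran.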


Comment 9 Tibor SEREG 2006-01-24 18:37:19 UTC
I have the same problem as comment #8 with FC3 (and FC4).  Here's the
partitioning include file generated by a Python script:

part raid.01 --onpart sda1
part raid.02 --onpart sda2
part /Images/Images1 --fstype=xfs --noformat --onpart sda3
part raid.11 --onpart sdb1
part raid.12 --onpart sdb2
part /Images/Images2 --fstype=xfs --noformat --onpart sdb3
raid /boot --level=1 --device=md0 --fstype=ext3 raid.01 raid.11
raid pv.01 --level=1 --device=md1 raid.02 raid.12
volgroup vg0 pv.01
logvol / --name=root --vgname=vg0 --fstype=ext3 --size=2000
logvol swap --name=swap --vgname=vg0 --fstype=swap --size=2000
logvol /bela --name=bela --vgname=vg0 --fstype=ext3 --size=2000
logvol /var/lib/mysql --name=db --vgname=vg0 --fstype=ext2 --size=4537

Not a single mirror device gets created from the members specified with
--onpart (so the LVM on top has no effect; I have tried many other approaches).


Comment 10 Jeremy Katz 2006-04-24 18:36:12 UTC
Mass-closing lots of old bugs which are in MODIFIED (and thus presumed to be
fixed).  If any of these are still a problem, please reopen or file a new bug
against the release which they're occurring in so they can be properly tracked.

