Bug 231430 - Disklabel randomness
Product: Fedora
Classification: Fedora
Component: anaconda
Platform: x86_64 Linux
Priority: medium  Severity: low
Assigned To: Chris Lumens
Reported: 2007-03-08 00:41 EST by Matt
Modified: 2009-10-21 11:40 EDT

Doc Type: Bug Fix
Last Closed: 2007-08-16 13:23:44 EDT

Description Matt 2007-03-08 00:41:37 EST
Description of problem:

In the kickstart script I use the commands:
clearpart --all --initlabel
zerombr yes
And yet, in a seemingly random way, I get 1's appended to some of the disk
labels, so that /etc/fstab on two different machines may look like:
LABEL=/1        /       ext3    defaults        0 0                             
LABEL=/boot1    /boot   ext3    defaults        0 0                             
LABEL=/         /       ext3    defaults        0 0                             
LABEL=/boot     /boot   ext3    defaults        0 0                             
This is despite both machines being built from the same KS script.
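
(For checking affected machines: the on-disk label can be read, and reset, with
e2label. A quick illustrative sketch; the device names here are assumptions:)

e2label /dev/sda2        # print the current label, e.g. /1
e2label /dev/sda2 /      # set the label back to /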

Version-Release number of selected component (if applicable):

Seen under FC5 and FC6

How reproducible:

2 in 4 machines built with the same KS script.

Steps to Reproduce:
1. Write a kickstart script.
2. Build 4 - 6 machines.
3. Check fstab files.
Actual results:

Some machines have 1's added to their disklabels. Some do not.

Expected results:

Consistent disklabels across all machines built with the same script.

Additional info:

Of course this is not a big issue. The machine still works. However, if I want
to add/remove/change mount points (mostly NFS) across all of the machines I built,
I can't just overwrite every machine with the one fstab file, as some machines
will fail.

Workaround:

Write an fstab that uses device names rather than disk labels.
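
For example (hypothetical device names; the real names depend on the machine):

/dev/sda2       /       ext3    defaults        0 0
/dev/sda1       /boot   ext3    defaults        0 0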

Why it still needs to be fixed:

Inconsistency is something I expect on the Windows platform.
Comment 1 Chris Lumens 2007-03-12 14:51:24 EDT
Do some of these machines have a preexisting installation of some sort of Linux?
Comment 2 Matt 2007-03-12 18:43:51 EDT
No, these are generally brand new machines that have only ever had my kickstart
script run on them.

It may have been run on them 2 or 3 times to iron out bugs in the build process,
but it's run on every machine 2 or 3 times. 
Comment 3 Matt 2007-04-02 02:55:55 EDT
Does clearpart do anything?
Today, after doing a clearpart --drives=sda,sdb,sdc,sdd,sde,sdf,sdg,sdh,sdi --all
and then trying to build a RAID 0 array across drives sda-sdh, it told me that it
could not create the array because sda already had an ext3 filesystem on it.

clearpart --drives=sda,sdb,sdc,sdd,sde,sdf,sdg,sdh,sdi --all
zerombr yes

part /boot --size=256 --fstype="ext3" --ondisk=sdi --asprimary
part swap --size=8192 --fstype="swap" --ondisk=sdi --asprimary
part / --size=1 --fstype="ext3" --ondisk=sdi --asprimary --grow

part raid.01 --size=1 --grow --ondisk=sda
part raid.02 --size=1 --grow --ondisk=sdb
part raid.03 --size=1 --grow --ondisk=sdc
part raid.04 --size=1 --grow --ondisk=sdd
part raid.05 --size=1 --grow --ondisk=sde
part raid.06 --size=1 --grow --ondisk=sdf
part raid.07 --size=1 --grow --ondisk=sdg
part raid.08 --size=1 --grow --ondisk=sdh
raid /tmp --level=0 --device=md0 raid.01 raid.02 raid.03 raid.04 raid.05 raid.06 raid.07 raid.08

I also had it complain that md0 already existed in the mdList.

In the end I pulled out all the RAID setup, and shifted it into %post
using parted to do the work.
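
(For illustration, a rough sketch of that %post approach, assuming the sda-sdh
layout above; mdadm, which the comment does not mention, is assumed here for the
array-creation step that parted alone does not do:)

%post
# partition each RAID member disk (device list taken from the kickstart above)
for d in sda sdb sdc sdd sde sdf sdg sdh; do
    parted -s /dev/$d mklabel msdos
    parted -s /dev/$d mkpart primary ext3 0% 100%
done
# create the RAID 0 array and put a filesystem on it
# (an fstab entry for /tmp on /dev/md0 would still be needed)
mdadm --create /dev/md0 --level=0 --raid-devices=8 /dev/sd[a-h]1
mkfs.ext3 /dev/md0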

System has 1 SATA disk and 8 SAS disks. Really unimpressed that during the
install process the SATA module was loaded second, after the SAS driver, so the
SATA OS disk was listed as sdi, and yet on reboot after install the SATA module
was loaded first, so the SATA disk was listed as sda.

Consistency! Please!
Comment 5 Chris Lumens 2007-05-07 14:27:55 EDT
I've just done several installs in a row and have not been able to reproduce
this problem, though I do remember seeing it in the past.  Are you still able to
reproduce it as well?

Also, the "md0 is already in the mdList" junk has been straightened out, so you
shouldn't be seeing that any more.
Comment 6 Phil Oester 2007-06-13 11:39:53 EDT
See also bug #242081 -- this is happening in FC7 also.
Comment 7 Chris Lumens 2007-08-16 13:23:44 EDT
Closing based on comment #6 in bug 242081.
Comment 8 Allen May 2009-10-21 11:40:26 EDT
In addition to 242081, bugs 231430, 163921, and 209291 all report the same thing.  It looks like it was fixed and snuck back into RHEL 5.1 through 5.3.

We're getting it almost every time now even when putting in a dd to wipe out the boot record in the %pre section.
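
(The %pre wipe described there is typically something like this sketch; the
device list is an assumption:)

%pre
# zero the MBR (boot code plus partition table) on each disk
for d in /dev/sda /dev/sdb; do
    dd if=/dev/zero of=$d bs=512 count=1
done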
