Description of problem:
In the kickstart script I use the command
clearpart --all --initlabel
And yet, in a seemingly random way, I get 1's appended to some of the disk
labels, so that /etc/fstab on two different machines may look like this:
LABEL=/1 / ext3 defaults 0 0
LABEL=/boot1 /boot ext3 defaults 0 0
LABEL=/ / ext3 defaults 0 0
LABEL=/boot /boot ext3 defaults 0 0
This happens despite both machines being built from the same kickstart script.
Version-Release number of selected component (if applicable):
Seen under FC5 and FC6

How reproducible:
2 in 4 machines built with the same kickstart script.
Steps to Reproduce:
1. Write a kickstart script.
2. Build 4 - 6 machines.
3. Check fstab files.

Actual results:
Some machines have 1's appended to their disk labels; some do not.

Expected results:
Consistent disk labels across all machines built with the same script.
Of course this is not a big issue; the machine still works. However, if I want
to add/remove/change mount points (mostly NFS) across all of the machines I
built, I can't just overwrite each machine's fstab with one common file, as the
labels won't match on some machines.

Workaround:
Write an fstab that uses device names rather than disk labels.
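For illustration, a device-name version of the fstab entries above might look
like this (the layout is an assumption; /dev/sda1 and /dev/sda2 are hypothetical
and will differ per machine):
/dev/sda2 / ext3 defaults 0 0
/dev/sda1 /boot /boot ext3 defaults 0 0
Another option would be to force consistent labels from %post with e2label
(e.g. e2label /dev/sda2 /), since e2label simply rewrites the ext3 volume label
that LABEL= entries are matched against.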

Why it still needs to be fixed:
Inconsistency is something I expect on the Windows platform.
Do some of these machines have a preexisting installation of some sort of Linux?
No, these are generally brand new machines that have only ever had my kickstart
script run on them.
The script may have been run 2 or 3 times per machine to iron out bugs in the
build process, but it was run 2 or 3 times on every machine, so that doesn't
explain why only some of them get the 1 suffix.
Does clearpart do anything?
Today, after doing a clearpart --drives=sda,sdb,sdc,sdd,sde,sdf,sdg,sdh,sdi --all
and then trying to build a RAID 0 array across drives sda - sdh, the installer
told me that it could not create the array because sda already had an ext3
filesystem on it.
clearpart --drives=sda,sdb,sdc,sdd,sde,sdf,sdg,sdh,sdi --all
part /boot --size=256 --fstype="ext3" --ondisk=sdi --asprimary
part swap --size=8192 --fstype="swap" --ondisk=sdi --asprimary
part / --size=1 --fstype="ext3" --ondisk=sdi --asprimary --grow
part raid.01 --size=1 --grow --ondisk=sda
part raid.02 --size=1 --grow --ondisk=sdb
part raid.03 --size=1 --grow --ondisk=sdc
part raid.04 --size=1 --grow --ondisk=sdd
part raid.05 --size=1 --grow --ondisk=sde
part raid.06 --size=1 --grow --ondisk=sdf
part raid.07 --size=1 --grow --ondisk=sdg
part raid.08 --size=1 --grow --ondisk=sdh
raid /tmp --level=0 --device=md0 raid.01 raid.02 raid.03 raid.04 raid.05 raid.06 raid.07 raid.08
I also had it complain that md0 already existed in the mdList.
In the end I pulled out all the RAID setup, and shifted it into %post
using parted to do the work.
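A minimal sketch of that kind of %post workaround, assuming the layout above
(the partition commands, mdadm options, and fstab entry are illustrative, not
the exact script used; %post normally runs chrooted into the installed system,
so the fstab edit lands in the right place):
%post
# Put a single full-disk partition on each RAID member (sda - sdh).
for d in sda sdb sdc sdd sde sdf sdg sdh; do
    parted -s /dev/$d mklabel msdos
    parted -s /dev/$d mkpart primary 0% 100%
done
# Build the RAID 0 array from the new partitions, then create and register /tmp.
mdadm --create /dev/md0 --level=0 --raid-devices=8 /dev/sd[a-h]1
mkfs.ext3 /dev/md0
echo '/dev/md0 /tmp ext3 defaults 0 0' >> /etc/fstab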
The system has 1 SATA disk and 8 SAS disks. I'm really unimpressed that during
the install the SATA module was loaded after the SAS driver, so the SATA OS disk
was listed as sdi, and yet on reboot after the install the SATA module was
loaded first, so the SATA disk was listed as sda.
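That reordering is the sort of thing persistent device naming is meant to
absorb; one alternative to both raw sdX names and the unstable labels is udev's
/dev/disk/by-id links, assuming the initrd and mount tooling on the installed
system can resolve them (the link names below are made up for illustration):
# fstab entries via persistent by-id links instead of sdX names or labels
/dev/disk/by-id/scsi-SAMPLE_SERIAL_0001-part2 / ext3 defaults 0 0
/dev/disk/by-id/scsi-SAMPLE_SERIAL_0001-part1 /boot ext3 defaults 0 0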
I've just done several installs in a row and have not been able to reproduce
this problem, though I do remember seeing it in the past. Are you still able to
reproduce it as well?
Also, the "md0 is already in the mdList" junk has been straightened out, so you
shouldn't be seeing that any more.
See also bug #242081 -- this is happening in FC7 also.
Closing based on comment #6 in bug 242081.
In addition to bug 242081, bugs 231430, 163921, and 209291 all report the same
thing. It looks like it was fixed and then snuck back in to RHEL 5.1 through 5.3.
We're getting it almost every time now, even when putting a dd in the %pre
section to wipe out the boot record.
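For reference, the kind of %pre wipe being described is something like this
sketch (the target disk and zeroing only the first 512-byte sector are
assumptions):
%pre
# Zero the MBR (boot code plus the DOS partition table) before partitioning.
dd if=/dev/zero of=/dev/sda bs=512 count=1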