From Bugzilla Helper:
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.0.2)
Description of problem:
a) Let's begin with the hardware:
A Fujitsu-Siemens Primergy L100 rackable server machine without a
preinstalled operating system (hereafter called "server").
An on-board Promise PDC20265R ASIC, allegedly corresponding to a
Promise FastTrack 100 Lite PCI add-on card. This is a BIOS-based
'software' raid. THE RAID HAS BEEN DISABLED using a jumper, so the
disks are visible as NORMAL ATA disks.
Two identical Seagate ST340810A ATA harddisks of 40 GB "marketing
capacity", controlled by said ASIC (hereafter called "harddisks").
b) Each disk has its own IDE bus, and the first one is visible
as /dev/hde, the second as /dev/hdg - which might or might not
be the primary cause of the problem. The CD-ROM is /dev/hdc on
yet another IDE bus.
c) Extra problem, may not be relevant: when installing Linux, and
after partitioning with fdisk, you get the following message:
"Re-reading the partition table failed because device or resource
busy. The kernel still uses the old partition table; the new one
will be used at next reboot."
Rebooting after that step and then proceeding with installation
seems to work OK, though.
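For reference, the reboot can usually be avoided by asking the kernel to re-read the partition table once nothing on the disk is mounted. A sketch using util-linux's blockdev; /dev/hde is the first disk from section b), and the device-existence guard is my addition:

```shell
#!/bin/sh
# Ask the kernel to re-read a disk's partition table without rebooting.
# /dev/hde is the first disk on this box (see section b); adjust as needed.
disk=/dev/hde

if [ -b "$disk" ]; then
    # Fails with "Device or resource busy" while any partition of the
    # disk is still mounted - which is exactly the installer situation.
    blockdev --rereadpt "$disk" ||
        echo "$disk is busy - unmount its partitions or reboot"
fi
```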
d) What works, what does not:
-> Installing Linux RH 8.0 with Promise RAID disabled, then setting
up the disks as a Linux software RAID (using 'md') in mirror
THIS SYSTEM BOOTS LIKE A CHARM
-> Installing Linux RH 8.0 on the first disk, w/o any mirroring
whatsoever (neither Promise nor MD)
THIS SYSTEM CANNOT BOOT, because the 'mount' commands fail as
described in e) below.
e) Where's the real problem?
In /etc/rc.sysinit, remounting the root filesystem read-write
mount -n -o remount,rw /
"Remounting in r-w mode: no such partition found"
After that, boottime things go haywire, of course.
A few lines later, local filesystems are mounted. If the first
problem is fixed as described below, the command:
mount -a -t nonfs,smbfs,ncpfs -O no_netdev
gives the following errors:
special device LABEL=/exp1 does not exist
special device LABEL=/home does not exist
special device LABEL=/tmp does not exist
special device LABEL=/usr does not exist
special device LABEL=/var does not exist
However, the labels are OK, and it is only these five partitions
it complains about. Only /dev/hde1, /dev/hde2, /dev/hde13 work,
not the rest (I have partitions 1,2,5,6,7,8,9,10,11,12,13).
Looking at the 'mount' source, the error message means that the
kernel gave an ENOENT error, 'pathname empty or has a non-existent
component'.
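The label lookup can be pictured like this: mount enumerates the block devices the kernel lists in /proc/partitions and probes each one's superblock for a matching label. A rough illustration (not mount's real code), using the partition set this box apparently reports:

```shell
#!/bin/sh
# Sketch of mount's LABEL= resolution: walk /proc/partitions, probe each
# partition's superblock for the label.  The sample below mimics what
# this box apparently reports - only hde1, hde2 and hde13 are visible,
# so any label living on hde5..hde12 can only yield ENOENT.
sample='major minor  #blocks  name

  33     0   39082680 hde
  33     1     104391 hde1
  33     2    2096482 hde2
  33    13    1052226 hde13'

# Enumerate candidate device nodes the way the label scan would
# (skip the header and the whole-disk entry, keep the partitions):
echo "$sample" | awk 'NR > 2 && $4 ~ /[0-9]$/ { print "/dev/" $4 }'
```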
f) Tried whether there was a problem with the ext3 filesystem:
same trouble with the system on ext2 or ext3.
Tried whether there was a problem with the labels (using 'e2label'
and 'vi /etc/fstab'):
Same trouble with labels like '/' or 'ROOT',
'/usr' or 'USR', etc.
Mounting the root filesystem not through label but through
device name works, i.e. edit rc.sysinit and write:
mount -n -o remount,rw /dev/hde2 /
and instead of mounting the other filesystems using mount -a, do
mount /dev/hde1 /boot
mount /dev/hde2 /
mount /dev/hde5 /tmp
mount /dev/hde6 /home
mount /dev/hde7 /usr
mount /dev/hde8 /var
mount /dev/hde9 /var/log
mount /dev/hde10 /var/spool
mount /dev/hde11 /var/db
mount /dev/hde12 /exp1
mount /dev/hde13 /exp2
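Made permanent, the same workaround means replacing each LABEL= entry in /etc/fstab with the device name. A sketch based on the mount list above (the mount options are assumed to be the Red Hat defaults, not taken from the reporter's actual file):

```text
/dev/hde1   /boot        ext3  defaults  1 2
/dev/hde2   /            ext3  defaults  1 1
/dev/hde5   /tmp         ext3  defaults  1 2
/dev/hde6   /home        ext3  defaults  1 2
/dev/hde7   /usr         ext3  defaults  1 2
/dev/hde8   /var         ext3  defaults  1 2
/dev/hde9   /var/log     ext3  defaults  1 2
/dev/hde10  /var/spool   ext3  defaults  1 2
/dev/hde11  /var/db      ext3  defaults  1 2
/dev/hde12  /exp1        ext3  defaults  1 2
/dev/hde13  /exp2        ext3  defaults  1 2
```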
h) Nota bene: the usual parameters to the kernel about IDE drives,
used when the Promise RAID is in use, i.e.:
ide0=0x1f0,0x3f6,14 ide1=0x170,0x376,15 ide2=0 \
ide3=0 ide4=0 ide5=0 ide6=0 ide7=0 ide8=0 ide9=0
must not be given when the RAID is not in use. Otherwise
the disks won't be seen.
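In lilo.conf terms, that means the append line should carry these parameters only in the RAID-enabled configuration. A sketch (image path and label are generic placeholders, not taken from this machine):

```text
image=/boot/vmlinuz
    label=linux
    read-only
    # Only with the Promise RAID enabled - drop this line otherwise:
    append="ide0=0x1f0,0x3f6,14 ide1=0x170,0x376,15 ide2=0 ide3=0 ide4=0 ide5=0 ide6=0 ide7=0 ide8=0 ide9=0"
```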
Version-Release number of selected component (if applicable):
Steps to Reproduce:
Do not modify the standard setup that was put on your disk.
Actual Results: Root partition is not remounted read-write.
Expected Results: Root partition should be remounted read-write.
Workaround & additional info given in text.
How about in RHL9?
With some luck, I will be able to test in RH9.0 within the next 2 weeks.
I tried Red Hat Linux 9.0 with ext3 partitions and I get the same error,
i.e. installation on the first IDE disk works, then the boot of the
installed system fails when mount tries to remount root r-w:
"mount: no such partition found"
(Incidentally, what did you people do with fdisk? Remove it from the installation?)
I will try to set up RH on a software-mirrored-two-disk-RAID later. I expect
this to work.
Same problem here with RH9.
We avoid it by editing /etc/lilo.conf and /etc/fstab.
Our disks appear as hde and hdg. We can use the FastTrak module (it is
painful) and /dev/sda appears; then we change lilo.conf and fstab to use
sda instead of the other drive names.
In fstab we must avoid labels, since otherwise the box goes crazy at
boot, mounting the filesystems read-only due to the confusion.
Our fstab is:
lnxsrv01 ~ # cat /etc/fstab
/dev/sda3 / ext3 defaults 1 1
/dev/sda6 /backup ext3 defaults 1 2
/dev/sda1 /boot ext3 defaults 1 2
none /dev/pts devpts gid=5,mode=620 0 0
/dev/sda9 /home ext3 defaults 1 2
none /proc proc defaults 0 0
none /dev/shm tmpfs defaults 0 0
/dev/sda7 /tmp ext3 defaults 1 2
/dev/sda2 /usuarios ext3 defaults 1 2
/dev/sda8 /var/log ext3 defaults 1 2
/dev/sda5 /var/spool ext3 defaults 1 2
/dev/sda10 swap swap defaults 0 0
/dev/cdrom /mnt/cdrom udf,iso9660 noauto,owner,kudzu,ro 0 0
/dev/fd0 /mnt/floppy auto noauto,owner,kudzu 0 0
We are not sure if this config will cause problems in future upgrades of the
distribution. I hope this helps.
Would it be possible to dump the contents of /proc/partitions
immediately after the error is received? I'd like to find out what
partitions the system thinks it has - it's possible that the right
modules are not getting loaded at boot, causing the
find-partition-by-label bit to fail.
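For whoever can still reproduce this, a quick way to collect what was asked for, plus a label-resolution check. The findfs utility and the label list are my additions (labels taken from the errors in section e):

```shell
#!/bin/sh
# Dump the kernel's view of the partitions right after the failed remount:
cat /proc/partitions

# Then check whether each failing fstab label resolves to a device at all:
for lbl in /exp1 /home /tmp /usr /var; do
    findfs LABEL="$lbl" 2>/dev/null ||
        echo "LABEL=$lbl: not found"
done
```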
Some update: The server is currently in production but might be
liberated before end-of-year so I can play around on it. If there is
any news, 'I will be back'.
Since there are insufficient details provided in this report for us to
investigate the issue further, and we have not received the feedback we
requested, we will assume the problem was not reproducible or has been fixed in
a later update for this product.
Users who have experienced this problem are encouraged to upgrade to the latest
update release, and if this issue is still reproducible, please contact the Red
Hat Global Support Services page on our website for technical support options:
If you have a telephone based support contract, you may contact Red Hat at
1-888-GO-REDHAT for technical support for the problem you are experiencing.
Being the original reporter, I would like to nail this coffin shut by saying
'don't trust the L100 hardware'. The server is now running RH ES 4.0 on a
software-mirrored RAID, but there was a lot of installation trouble due to
failing disk accesses, which can be summarized as follows:
The L100 cannot be rebooted. It must be cold-started, otherwise the first
disk (hde) will behave in a very weird way.
More in bug #174306