Bug 97973 - mounting partitions on ATA disks using disk labels automagic fails (N.B.: weird hardware)
Status: CLOSED CANTFIX
Product: Red Hat Linux
Classification: Retired
Component: mount
Version: 8.0
Hardware: i686 Linux
Priority: medium
Severity: medium
Assigned To: Karel Zak
Brian Brock
Depends On:
Blocks:
Reported: 2003-06-24 15:02 EDT by David Tonhofer
Modified: 2007-04-18 12:55 EDT
0 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2005-09-08 06:50:53 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description David Tonhofer 2003-06-24 15:02:04 EDT
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.0.2)
Gecko/20030208 Netscape/7.02

Description of problem:

a) Let's begin with the hardware:

   A Fujitsu-Siemens Primergy L100 rackable server machine w/o a
   preinstalled operating system (hereafter called 'server').

   An on-board Promise PDC20265R ASIC, allegedly corresponding to a
   Promise FastTrack 100 Lite PCI add-on card. This is a BIOS-based
   'software' RAID. THE RAID HAS BEEN DISABLED using a jumper, so the
   disks are visible as NORMAL ATA disks.

   Two identical Seagate ST340810A ATA harddisks of 40 GB 'marketing
   capacity', controlled by said ASIC (hereafter called 'harddisks')

b) Each disk has its own IDE bus, and the first one is visible
   as /dev/hde, the second as /dev/hdg - which might or might not
   be the primary cause of the problem. The CD-ROM is /dev/hdc on
   yet another IDE bus.

c) Extra problem, may not be relevant: when installing Linux, and
   after partitioning with fdisk, you get the following message:

   're-reading the partition table failed because device or resource
    busy - the kernel still uses the old partition table, the new one
    will be used at next reboot.'

   Rebooting after that step and then proceeding with installation
   seems to work OK, though.

d) What works, what does not: 

   -> Installing Linux RH 8.0 with Promise RAID disabled, then setting
      up the disks as a Linux software RAID (using 'md') in mirror
      mode: A-OK! 
      THIS SYSTEM BOOTS LIKE A CHARM

   -> Installing Linux RH 8.0 on the first disk, w/o any mirroring 
      whatsoever (neither Promise nor MD)
      THIS SYSTEM CANNOT BOOT, because the 'mount' commands in
      /etc/rc.sysinit FAIL.

e) Where's the real problem?

   In /etc/rc.sysinit, remounting the root filesystem read-write
   fails:

   mount -n -o remount,rw /

   "Remounting in r-w mode: no such partition found"

   After that, boottime things go haywire, of course. 

   A few lines later, local filesystems are mounted. If the first
   problem is fixed as described in the workaround below, the command:

   mount -a -t nonfs,smbfs,ncpfs -O no_netdev

   gives the following errors:

   special device LABEL=/exp1 does not exist
   special device LABEL=/home does not exist
   special device LABEL=/tmp does not exist
   special device LABEL=/usr does not exist
   special device LABEL=/var does not exist

   However, the labels are OK. And it's only these five partitions
   it complains about: only /dev/hde1, /dev/hde2 and /dev/hde13 work,
   not the rest (I have partitions 1,2,5,6,7,8,9,10,11,12,13)

   Looking at the 'mount' source, the error message means that the
   kernel gave an ENOENT error, 'pathname empty or has a non-existent
   component'. 

f) Tried whether there was a problem with the ext3 filesystem:
     same trouble with the system on ext2 or ext3.
   Tried whether there was a problem with the labels (using 'e2label'
   and 'vi /etc/fstab'):
     same trouble with labels like '/' or 'ROOT',
     '/usr' or 'USR', etc.
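The label experiments above amount to cross-checking the LABEL= entries in fstab against the labels actually present on disk. A minimal sketch of that check, with a hypothetical helper name (`check_fstab_labels`); on the real box the label list would be collected with `e2label /dev/hdeN` for each partition, here it is just a file with one label per line:

```shell
# Hypothetical helper: print a mount-style error for every LABEL= entry in
# an fstab whose label is absent from a "labels actually on disk" list.
check_fstab_labels() {
  fstab="$1" labels="$2"
  grep -o 'LABEL=[^[:space:]]*' "$fstab" | sed 's/^LABEL=//' |
  while read -r l; do
    grep -Fqx -- "$l" "$labels" ||
      echo "special device LABEL=$l does not exist"
  done
}
```

If the labels file is correct and the check still reports missing labels, the problem is not in fstab but in how the kernel/mount enumerates the partitions.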

g) Workaround:

   Mounting the root filesystem not through label but through
   device name works, i.e. edit rc.sysinit and write:

   mount -n -o remount,rw /dev/hde2 /

   and instead of mounting the other filesystems using 'mount -a', do
   it explicitly:

   mount /dev/hde1 /boot
   mount /dev/hde2 /
   mount /dev/hde5 /tmp
   mount /dev/hde6 /home
   mount /dev/hde7 /usr
   mount /dev/hde8 /var
   mount /dev/hde9 /var/log
   mount /dev/hde10 /var/spool
   mount /dev/hde11 /var/db
   mount /dev/hde12 /exp1
   mount /dev/hde13 /exp2
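The same workaround could be mechanized by rewriting the LABEL= entries in fstab to explicit device paths. A sketch, assuming a label-to-device map built beforehand (the helper name `relabel_fstab` and the map format are hypothetical; on the real system the map would come from running `e2label` over /dev/hde1..13):

```shell
# Hypothetical helper: read a map of "<label> <device>" lines, then rewrite
# fstab lines on stdin, replacing a leading LABEL=<label> field with the
# mapped device path. Lines with no matching label pass through unchanged.
relabel_fstab() {
  map="$1"
  awk 'NR == FNR { dev["LABEL=" $1] = $2; next }   # first file: load the map
       $1 in dev { $1 = dev[$1] }                  # stdin: swap label for device
       { print }' "$map" -
}
```

Note that awk re-joins fields with single spaces on output, so column alignment is lost; for a one-off rc.sysinit fix the hand-written mount list above is just as good.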

h) Nota bene: the usual kernel parameters for IDE drives,
   used when the Promise RAID is in use, i.e.:

   ide0=0x1f0,0x3f6,14 ide1=0x170,0x376,15 ide2=0 \
   ide3=0 ide4=0 ide5=0 ide6=0 ide7=0 ide8=0 ide9=0
   
   must not be given when the RAID is not in use. Otherwise
   the disks won't be seen.



Version-Release number of selected component (if applicable):
mount-2.11r-10

How reproducible:
Always

Steps to Reproduce:
Do not modify the standard setup that was put on your disk.
Reboot.

Actual Results:  Root partition is not remounted read-write.

Expected Results:  Root partition should be remounted read-write.

Additional info:

Workaround & additional info given in text.
Comment 2 Elliot Lee 2003-07-22 12:29:38 EDT
How about in RHL9?
Comment 3 David Tonhofer 2003-07-22 12:33:18 EDT
With some luck, I will be able to test in RH9.0 within the next 2 weeks.
Comment 4 David Tonhofer 2003-07-25 09:47:05 EDT
I tried with RedHat Linux 9.0 with ext3 partitions, I get the same error,
i.e. installation on the first IDE disk works, then the boot of the
installed system fails when mount tries to remount root r-w:

"mount: no such partition found"

(Incidentally what did you people do with fdisk? Remove it from the installation
process?)

I will try to set up RH on a software-mirrored-two-disk-RAID later. I expect
this to work. 
Comment 5 Need Real Name 2003-10-27 16:57:43 EST
Same problem here with RH9.
We avoid it by editing /etc/lilo.conf and /etc/fstab.
Our disks appear as hde and hdg. We could use the FastTrak module (it is
painful), after which /dev/sda appears; we then change lilo.conf and fstab
to use sda instead of the other hard drive names.

In fstab we must avoid the use of labels, since the box goes haywire at
boot and mounts the filesystems read-only due to the confusion.

Our fstab is:
lnxsrv01 ~ # cat /etc/fstab
/dev/sda3    /              ext3         defaults               1 1
/dev/sda6    /backup        ext3         defaults               1 2
/dev/sda1    /boot          ext3         defaults               1 2
none         /dev/pts       devpts       gid=5,mode=620         0 0
/dev/sda9    /home          ext3         defaults               1 2
none         /proc          proc         defaults               0 0
none         /dev/shm       tmpfs        defaults               0 0
/dev/sda7    /tmp           ext3         defaults               1 2
/dev/sda2    /usuarios      ext3         defaults               1 2
/dev/sda8    /var/log       ext3         defaults               1 2
/dev/sda5    /var/spool     ext3         defaults               1 2
/dev/sda10   swap           swap         defaults               0 0
/dev/cdrom   /mnt/cdrom     udf,iso9660  noauto,owner,kudzu,ro  0 0
/dev/fd0     /mnt/floppy    auto         noauto,owner,kudzu     0 0

We are not sure whether this config will cause problems in future upgrades
of the system :P

I hope this helps.
Comment 6 Elliot Lee 2004-08-20 14:14:54 EDT
Would it be possible to dump the contents of /proc/partitions
immediately after the error is received? I'd like to find out what
partitions the system thinks it has - it's possible that the right
modules are not getting loaded at boot, causing the
find-partition-by-label bit to fail.
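What comment 6 asks for boils down to listing the block devices the kernel actually knows about and comparing them with what fstab expects. A minimal sketch (the helper name is made up; /proc/partitions has a two-line header followed by "major minor #blocks name" rows):

```shell
# Hypothetical helper: read /proc/partitions-style input on stdin and print
# just the device-name column, skipping the header and blank line.
list_kernel_partitions() {
  awk 'NR > 2 && NF == 4 { print $4 }'
}

# On the failing box one would run, right after the error:
#   list_kernel_partitions < /proc/partitions
# If hde5..hde12 are absent from the output, the labels cannot be found
# because the kernel never registered those partitions.
```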
Comment 7 David Tonhofer 2004-10-03 09:08:37 EDT
Some update: The server is currently in production but might be
liberated before end-of-year so I can play around on it. If there is
any news, 'I will be back'.
Comment 8 Karel Zak 2005-09-08 06:50:53 EDT
Since there are insufficient details provided in this report for us to
investigate the issue further, and we have not received the feedback we
requested, we will assume the problem was not reproducible or has been fixed in
a later update for this product.

Users who have experienced this problem are encouraged to upgrade to the latest
update release, and if this issue is still reproducible, please contact the Red
Hat Global Support Services page on our website for technical support options:
https://www.redhat.com/support

If you have a telephone based support contract, you may contact Red Hat at
1-888-GO-REDHAT for technical support for the problem you are experiencing. 
Comment 9 David Tonhofer 2005-12-30 08:54:58 EST
Being the original reporter, I would like to nail this coffin shut by saying
'don't trust the L100 hardware'. The server is now running RH ES 4.0 on a
software-mirrored RAID but there was a lot of installation trouble due to
failing disk accesses, which can be summarized thusly:

  ************
  The L100 cannot be rebooted. It must be cold started otherwise the first
  disk (hde) will behave in a very very weird way.
  ************

More in bug #174306

