Bug 114950 - RAID5 JFS mount fails at boot time when using LABEL= in fstab
Summary: RAID5 JFS mount fails at boot time when using LABEL= in fstab
Alias: None
Product: Red Hat Enterprise Linux 3
Classification: Red Hat
Component: mount
Version: 3.0
Hardware: All Linux
Target Milestone: ---
Assignee: Karel Zak
QA Contact:
Depends On:
Reported: 2004-02-04 19:23 UTC by William D. Hamblen
Modified: 2007-11-30 22:07 UTC
CC List: 1 user

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2005-09-08 10:50:22 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments (Terms of Use)
/proc/partitions after entering maintenance mode (2.66 KB, text/plain)
2004-02-24 18:31 UTC, William D. Hamblen

Description William D. Hamblen 2004-02-04 19:23:35 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.4) Gecko/20030922

Description of problem:
I have an 8 disk software RAID5 JFS filesystem on a 3Ware 6800
controller.  It's labeled "/field2" and I have an entry in /etc/fstab
like this:

LABEL=/field2  /field2  jfs  defaults  1 2

"mount /field2" works properly when the system has fully booted or
from maintenance mode after a failed boot.  But with this setup, the
normal boot process fails (and drops me to maintenance mode) because
no LABEL=/field2 is found.  Replacing the LABEL= directive in fstab
with "/dev/md1" makes everything work.
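The boot scripts resolve a LABEL= token by scanning block devices for a matching superblock label; findfs(8) from util-linux performs the same resolution on demand, so it can show whether the label is visible at any given moment. A hedged sketch of that check (the label and device come from this report; findfs may not be present on every install, so the script degrades to a message rather than failing):

```shell
#!/bin/sh
# Ask the same resolver mount uses whether the label is visible right now.
# If findfs is missing or the label is not found, report that instead.
if dev=$(findfs LABEL=/field2 2>/dev/null); then
    echo "LABEL=/field2 -> $dev"    # on the reporter's box this would be /dev/md1
else
    echo "LABEL=/field2 not visible (the same state the boot scripts hit)"
fi
```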

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create software RAID5 JFS filesystem
2. Give it a LABEL in the superblock and in fstab
3. Reboot
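The steps above can be sketched as commands; this is a hedged reconstruction (the device name /dev/md1 and the label come from later comments in this report), guarded so nothing runs unless the array device actually exists:

```shell
#!/bin/sh
# Reconstruction of the reproduction steps; does nothing without the device.
DEV=/dev/md1
if [ -b "$DEV" ]; then
    mkfs.jfs -q "$DEV"                  # step 1: JFS on the RAID5 set
    jfs_tune -L "/field2" "$DEV"        # step 2a: label in the superblock
    echo 'LABEL=/field2  /field2  jfs  defaults  1 2' >> /etc/fstab  # step 2b
fi                                      # step 3: reboot and watch the fsck pass
```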

Actual Results:  Boot process fails with LABEL not found and drops you
to maintenance mode.

Expected Results:  Normal bootup.

Additional info:

Comment 1 Elliot Lee 2004-02-24 15:50:03 UTC
When you get dropped to maintenance mode, what do /proc/partitions
and /proc/mdstat look like? What modules are loaded?

Comment 2 William D. Hamblen 2004-02-24 18:31:47 UTC
Created attachment 98002 [details]
/proc/partitions after entering maintenance mode

Comment 3 William D. Hamblen 2004-02-24 18:49:58 UTC
To clarify my original submission a bit, the last successful step of
the boot process is
Finding module dependencies:  [OK]
Checking filesystems
Couldn't find matching filesystem: LABEL=/field2
   *** An error occurred during filesystem check
          (Repair filesystem) 1 #
/proc/mdstat looks like this:

Personalities : [raid5]
read_ahead 1024 sectors
Event: 1
md1 : active raid5 sdp1[7] sdo1[6] sdn1[5] sdm1[4] sdl1[3] sdk1[2] sdj1[1] sdi1[0]
      684962432 blocks level 5, 128k chunk, algorithm 2 [8/8] [UUUUUUUU]
      [>....................]  resync =  0.9% (910548/97851776) finish=470.5min speed=3430K/sec
unused devices: <none>

/proc/partitions is a little big so I've attached it.  Note there are
eight additional drives that will become md0 if I can come up with a
set of 8 that don't have bad blocks.  hda is the system disk (not on a
3Ware controller).

jfs_tune shows that the label on md1 is '    /field2'.  That's what I
got after setting it with 'jfs_tune -L "/field2" /dev/md1'.
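The padded label quoted above is suggestive: the stored label is '    /field2' while fstab says '/field2', so a byte-for-byte comparison during the boot-time probe would never match. A minimal illustration (the padded string is copied from this comment; the comparison itself is an assumption about how the label lookup might behave, not a quote from the RHEL 3 code):

```shell
#!/bin/sh
# Compare the label as stored in the superblock with the fstab token.
stored='    /field2'    # as printed by jfs_tune in this comment
wanted='/field2'        # as written in /etc/fstab
if [ "$stored" = "$wanted" ]; then
    echo "match"
else
    echo "no match: leading padding differs"
fi
```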

As to the modules, the jfs module is *not* loaded immediately after
entering maintenance mode.  But it will autoload if I do a "mount
/field2".  Here's the before list:

Module                  Size  Used by    Not tainted
keybdev                 2912   0  (unused)
mousedev                5428   0  (unused)
hid                    21892   0  (unused)
input                   5824   0  [keybdev mousedev hid]
usb-uhci               25996   0  (unused)
usbcore                78688   1  [hid usb-uhci]
ext3                   87016   1
jbd                    52080   1  [ext3]
raid5                  18600   1
xor                    12688   0  [raid5]
3w-xxxx                38976   8
sd_mod                 13712  16
scsi_mod              108392   2  [3w-xxxx sd_mod]

Comment 4 Ram 2004-08-05 01:14:31 UTC
I have installed AS 2.1 Update 3 on a Dell PE 2650. I have 2 logical
drives presented to the OS: /dev/sda and /dev/sdb

I install the OS through a kickstart file

/u01 -> /dev/sda
/u02 -> /dev/sdb2
/u03 -> /dev/sdb1
/u04 -> /dev/sdb3 etc...

Everything installs OK, but when I try to boot, it comes up with this error:

mount: mount: Special device LABEL=/u02 does not exist [FAILED]

The error is intermittent but occurs most of the time: out of around
8-10 reboots, only about 2 come up without the error message. To add
to that, it sometimes appears for the /u04 partition as well.

/etc/fstab has (all looks OK):
LABEL=/u02 -> /dev/sdb2
LABEL=/u04 -> /dev/sdb3

I saved a couple of files in the partitions at /u02, /u04, etc.; after
reboots the files are intact.

I tried these:
#umount /u02
#e2fsck /dev/sdb2
(i.e. u02)
#mount /u02

#e2label /dev/sdb2
(output is /u02)
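The e2label check above verifies one device at a time; the same idea can be applied to every LABEL= entry at once. A hedged sketch (the two lines mirror the entries described in this comment; the filesystem type is an assumption, and on a live system you would read /etc/fstab and hand each extracted label to findfs(8) or e2label(8) to confirm it resolves):

```shell
#!/bin/sh
# List every LABEL= token an fstab fragment would require the resolver
# to find at boot time.
fstab='LABEL=/u02 /u02 ext3 defaults 1 2
LABEL=/u04 /u04 ext3 defaults 1 2'
printf '%s\n' "$fstab" |
    awk '$1 ~ /^LABEL=/ { sub(/^LABEL=/, "", $1); print $1 }'
```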

Comment 5 Elliot Lee 2004-12-02 20:41:28 UTC
Does this bug look similar to #97973?

Comment 6 Karel Zak 2005-09-08 10:50:22 UTC
Since there are insufficient details provided in this report for us to
investigate the issue further, and we have not received the feedback we
requested, we will assume the problem was not reproducible or has been fixed in
a later update for this product.

Users who have experienced this problem are encouraged to upgrade to the latest
update release, and if this issue is still reproducible, please contact the Red
Hat Global Support Services page on our website for technical support options:

If you have a telephone based support contract, you may contact Red Hat at
1-888-GO-REDHAT for technical support for the problem you are experiencing. 
