Red Hat Bugzilla – Bug 114950
RAID5 JFS mount fails at boot time when using LABEL= in fstab
Last modified: 2007-11-30 17:07:00 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.4) Gecko/20030922
Description of problem:
I have an 8-disk software RAID5 JFS filesystem on a 3Ware 6800
controller. It's labeled "/field2" and I have this entry in /etc/fstab:

    LABEL=/field2    /field2    jfs    defaults    1 2
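One quick check is to mount by label by hand; "mount -L" is standard
util-linux and performs the same label lookup the fstab entry relies on
(the specific invocation is a suggestion, not from the report):

    # If this fails the same way, the label scan itself is at fault,
    # not the fstab syntax
    mount -t jfs -L /field2 /field2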
"mount /field2" works properly when the system has fully booted or
from maintenance mode after a failed boot. But with this setup, the
normal boot process fails (and drops me to maintenance mode) because
no LABEL=/field2 is found. Replacing the LABEL= directive in fstab
with "/dev/md1" makes everything work.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Create a software RAID5 JFS filesystem
2. Give it a LABEL in the superblock and in fstab
3. Reboot
Actual Results: Boot process fails with LABEL not found and drops you
to maintenance mode.
Expected Results: Normal bootup.
When you get dropped to maintenance mode, what do /proc/partitions
and /proc/mdstat look like? What modules are loaded?
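That state can be captured from the maintenance shell with the stock
commands:

    cat /proc/partitions   # block devices the kernel currently sees
    cat /proc/mdstat       # software RAID array status
    lsmod                  # loaded kernel modules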
Created attachment 98002: /proc/partitions after entering maintenance mode
To clarify my original submission a bit, the last successful step of
the boot process is

    Finding module dependencies: [OK]

after which it fails with:

    Couldn't find matching filesystem: LABEL=/field2
    *** An error occurred during filesystem check
    (Repair filesystem) 1 #
/proc/mdstat looks like this:
Personalities : [raid5]
read_ahead 1024 sectors
md1 : active raid5 sdp1 sdo1 sdn1 sdm1 sdl1 sdk1
      684962432 blocks level 5, 128k chunk, algorithm 2 [8/8] [UUUUUUUU]
      [>....................]  resync =  0.9% (910548/97851776)
unused devices: <none>
/proc/partitions is a little big so I've attached it. Note there are
eight additional drives that will become md0 if I can come up with a set
of 8 that don't have bad blocks. hda is the system disk (not on a 3Ware
controller).
jfs_tune shows that the label on md1 is '/field2'. That's what I set
it to with 'jfs_tune -L "/field2" /dev/md1'.
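Setting and re-reading the label would look like this (the -L form is
quoted from the report above; the -l listing flag is my assumption about
the jfsutils version in use):

    jfs_tune -L "/field2" /dev/md1   # set the volume label (as in the report)
    jfs_tune -l /dev/md1             # list superblock fields, including the label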
As to the modules, the jfs module is *not* loaded immediately after
entering maintenance mode. But it will autoload if I do a "mount
/field2". Here's the before list:
Module                  Size  Used by    Not tainted
keybdev                 2912   0  (unused)
mousedev                5428   0  (unused)
hid                    21892   0  (unused)
input                   5824   0  [keybdev mousedev hid]
usb-uhci               25996   0  (unused)
usbcore                78688   1  [hid usb-uhci]
ext3                   87016   1
jbd                    52080   1  [ext3]
raid5                  18600   1
xor                    12688   0  [raid5]
3w-xxxx                38976   8
sd_mod                 13712  16
scsi_mod              108392   2  [3w-xxxx sd_mod]
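One possible workaround, if the label scan fails because jfs is not yet
available when fsck runs, would be to build the module into the initrd.
mkinitrd's --with flag is standard on this release, though whether this
actually cures the label lookup here is only a guess:

    # Rebuild the initrd so jfs and raid5 load before filesystems are mounted
    mkinitrd -f --with=jfs --with=raid5 /boot/initrd-$(uname -r).img $(uname -r)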
I have installed AS 2.1 Update 3 on a Dell PE 2650. I have 2 logical
drives presented to the OS, /dev/sda and /dev/sdb.
I installed the OS through a kickstart file with this layout:
/u01 -> /dev/sda
/u02 -> /dev/sdb2
/u03 -> /dev/sdb1
/u04 -> /dev/sdb3 etc...
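The corresponding fstab entries would be of the form shown below
(filesystem type assumed to be ext3, the AS 2.1 default; labels assumed
to match the mount points as laid down by kickstart):

    LABEL=/u01    /u01    ext3    defaults    1 2
    LABEL=/u02    /u02    ext3    defaults    1 2
    LABEL=/u03    /u03    ext3    defaults    1 2
    LABEL=/u04    /u04    ext3    defaults    1 2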
Everything installs OK, but when I try to boot it comes up with this
error:

    mount: mount: Special device LABEL=/u02 does not exist [FAILED]

It appears intermittently: out of around 8-10 reboots the error message
comes up most of the time, and only about 2 times does it not appear.
To add to that, it sometimes comes up for the /u04 partition as well.
In /etc/fstab, LABEL=/u02 maps to /dev/sdb2, and all looks OK. I saved
a couple of files in the partitions at /u02, /u04, etc.; after reboots
the files are intact. I also tried reading the label off the partition
directly (output is /u02).
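A quick loop to compare every partition's on-disk label against fstab
(e2label is the stock ext2/ext3 tool; the loop itself is only an
illustration, not what was run above):

    # Print the on-disk label of each partition mounted by LABEL= in fstab
    for dev in /dev/sdb1 /dev/sdb2 /dev/sdb3; do
        printf '%s: ' "$dev"; e2label "$dev"
    done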
Does this bug look similar to #97973?
Since there are insufficient details provided in this report for us to
investigate the issue further, and we have not received the feedback we
requested, we will assume the problem was not reproducible or has been
fixed in a later update for this product.
Users who have experienced this problem are encouraged to upgrade to the
latest update release, and if this issue is still reproducible, please
contact the Red Hat Global Support Services page on our website for
technical support options.
If you have a telephone-based support contract, you may contact Red Hat
at 1-888-GO-REDHAT for technical support for the problem you are
experiencing.