Bug 235465 - lock_nolock results in /sbin/mount.gfs: error 19 mounting
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: gfs-utils
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Robert Peterson
QA Contact: Dean Jansa
Depends On:
Reported: 2007-04-05 18:17 EDT by Axel Thimm
Modified: 2010-01-11 22:32 EST

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2007-06-19 11:57:26 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Axel Thimm 2007-04-05 18:17:31 EDT
Description of problem:
Trying to locally mount a gfs filesystem results in

# mount -o lockproto=lock_nolock /dev/mapper/test-data /mnt
/sbin/mount.gfs: error 19 mounting /dev/mapper/test-data on /mnt

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. gfs_mkfs -p lock_dlm -t test:data -j4 /dev/mapper/test-data
2. mount -o lockproto=lock_nolock /dev/mapper/test-data /mnt
Actual results:
/sbin/mount.gfs: error 19 mounting /dev/mapper/test-data on /mnt

Expected results:
Should locally mount the filesystem
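
For reference, the "error 19" in the mount.gfs message is an errno value, which on Linux is ENODEV ("No such device") — i.e., the kernel could not find a matching filesystem or lock module for the request. A quick way to decode it (using a Python interpreter here, which is an assumption and not part of the original report):

```shell
# Decode errno 19 on Linux; any errno table gives the same answer.
python3 -c 'import os; print(os.strerror(19))'
# -> No such device
```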

Additional info:
Comment 1 Nate Straz 2007-04-13 14:25:56 EDT
Was the lock_nolock module loaded when you tried to mount?
Comment 2 Axel Thimm 2007-04-13 15:32:21 EDT
Yes, I checked that it was loaded.

I also continued the cluster setup and found out I was using GFS1. I must have
used gfs_mkfs instead of mkfs.gfs2. In fact, the above error also hints at GFS1
instead of GFS2.

I nuked the setup, recreated the volumes and GFS2 filesystems on a proper
cluster, and that worked fine. If I umount these filesystems and remount them
with lock_nolock, it works. So it may be just GFS1 that doesn't mount with
lock_nolock.

I'm therefore moving this to gfs-utils, where it belongs. I have no intention of
using GFS1 filesystems, so I can't do further testing on GFS1. Should it pop up
again in a GFS2 context, I'll revisit this bug.
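
As an aside, a recent blkid can tell the two cases apart: a GFS1 volume reports TYPE="gfs" and a GFS2 volume TYPE="gfs2". The helper below merely parses `blkid -o export`-style output; it is an illustration (not part of gfs-utils), and the sample values are made up to mirror the device name in this report:

```shell
# Extract the TYPE= field from `blkid -o export`-style output.
fs_type() {
    printf '%s\n' "$1" | sed -n 's/^TYPE=//p'
}

# Stand-in for `blkid -o export /dev/mapper/test-data` (hypothetical output):
sample='DEVNAME=/dev/mapper/test-data
TYPE=gfs'
fs_type "$sample"    # -> gfs (i.e., GFS1)
```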
Comment 4 Robert Peterson 2007-05-10 11:18:20 EDT
This doesn't happen for me when using code built from the latest cvs 
tree.  I'll see if I can isolate which code fix made it work and make 
sure it got into 5.1.
Comment 5 Robert Peterson 2007-06-19 11:57:26 EDT
I cannot recreate this problem at any fix level of RHEL5.  I scratch-built
a clean RHEL5 system and performed the following steps without incident:

[root@tank-04 ~]# modprobe gfs
[root@tank-04 ~]# modprobe gfs2
[root@tank-04 ~]# modprobe lock_nolock
[root@tank-04 ~]# uname -a
Linux tank-04 2.6.18-8.el5 #1 SMP Fri Jan 26 14:15:21 EST 2007 i686 i686 i386
[root@tank-04 ~]# pvcreate /dev/sda
  Physical volume "/dev/sda" successfully created
[root@tank-04 ~]# vgcreate bob_vg /dev/sda
  Volume group "bob_vg" successfully created
[root@tank-04 ~]# lvcreate -L 50G bob_vg -n bobs_lv
  Logical volume "bobs_lv" created
[root@tank-04 ~]# gfs_mkfs -p lock_dlm -t test:data -j4  /dev/mapper/bob_vg-bobs_lv 
This will destroy any data on /dev/mapper/bob_vg-bobs_lv.

Are you sure you want to proceed? [y/n] y

Device:                    /dev/mapper/bob_vg-bobs_lv
Blocksize:                 4096
Filesystem Size:           12974628
Journals:                  4
Resource Groups:           198
Locking Protocol:          lock_dlm
Lock Table:                test:data

All Done
[root@tank-04 ~]# mount -o lockproto=lock_nolock /dev/mapper/bob_vg-bobs_lv /mnt
[root@tank-04 ~]# ls /mnt
[root@tank-04 ~]# umount /mnt
[root@tank-04 ~]# 

Note that this symptom does appear if one of the kernel modules is not
loaded at mount time.
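
That failure mode can be checked up front: /proc/filesystems lists every filesystem type currently registered with the kernel, so a missing "gfs" (or "gfs2") entry means a modprobe is needed before the mount can succeed. A minimal sketch — the helper name is mine, not anything shipped in gfs-utils:

```shell
# Return success if the kernel has the given filesystem type registered
# (i.e., the module is loaded or built in).
fs_registered() {
    grep -qw "$1" /proc/filesystems
}

if fs_registered gfs; then
    echo "gfs registered"
else
    echo "gfs missing: run 'modprobe gfs' first"
fi
```

The lock modules (lock_dlm, lock_nolock) don't register filesystem types, so those are better checked with lsmod.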

I'm closing this as WORKSFORME.  If this is still a problem, please 
reopen the bug record with information on how to recreate it.
