Description of problem:
Trying to locally mount a gfs filesystem results in

# mount -o lockproto=lock_nolock /dev/mapper/test-data /mnt
/sbin/mount.gfs: error 19 mounting /dev/mapper/test-data on /mnt

Version-Release number of selected component (if applicable):
gfs-utils-0.1.11-1.el5
gfs2-utils-0.1.25-1.el5
kernel-xen-2.6.18-8.1.1.el5

How reproducible:
always

Steps to Reproduce:
1. gfs_mkfs -p lock_dlm -t test:data -j4 /dev/mapper/test-data
2. mount -o lockproto=lock_nolock /dev/mapper/test-data /mnt

Actual results:
/sbin/mount.gfs: error 19 mounting /dev/mapper/test-data on /mnt

Expected results:
Should locally mount the filesystem.

Additional info:
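A note on the error number: mount.gfs appears to be reporting the raw errno, and errno 19 on Linux is ENODEV ("No such device"). One way to confirm the mapping, assuming the system python is available (this is only an illustration, not part of the original report):

# python -c 'import errno, os; print errno.errorcode[19], os.strerror(19)'
ENODEV No such device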
Was the lock_nolock module loaded when you tried to mount?
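A quick way to answer that from the shell (a sketch; the module names follow the ones used later in this report):

# lsmod | grep -E 'gfs|lock_nolock'
# modprobe gfs
# modprobe lock_nolock

If the mount succeeds after loading the modules, the missing module was the likely cause of the ENODEV.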
Yes, I checked that it was loaded. I also continued the cluster setup and found out I was using GFS1; I must have used gfs_mkfs instead of mkfs.gfs2. In fact, the above error also hints at GFS1 rather than GFS2. I nuked the setup, recreated the volumes and GFS2 filesystems on a proper cluster, and that worked fine. If I unmount these filesystems and mount them back with lock_nolock, it works. So it may be just GFS1 that doesn't mount with lock_nolock. I'm therefore moving this to gfs-utils, where it belongs. I have no intention of using GFS1 filesystems, so I can't do further testing on GFS1. Should it pop up again in a GFS2 context, I'll revisit this bug.
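For reference, a minimal sketch of the equivalent GFS2 steps, reusing the device and lock table names from the original report (mkfs.gfs2 is provided by gfs2-utils):

# mkfs.gfs2 -p lock_dlm -t test:data -j4 /dev/mapper/test-data
# mount -o lockproto=lock_nolock /dev/mapper/test-data /mnt

The lockproto=lock_nolock mount option overrides the on-disk lock_dlm protocol, so it should only be used when no other node has the filesystem mounted.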
This doesn't happen for me when using code built from the latest CVS tree. I'll see if I can isolate which code fix made it work and make sure it is included in 5.1.
I cannot recreate this problem at any fix level of RHEL5. I scratch built a clean RHEL5 system and performed the following steps without error:

[root@tank-04 ~]# modprobe gfs
[root@tank-04 ~]# modprobe gfs2
[root@tank-04 ~]# modprobe lock_nolock
[root@tank-04 ~]# uname -a
Linux tank-04 2.6.18-8.el5 #1 SMP Fri Jan 26 14:15:21 EST 2007 i686 i686 i386 GNU/Linux
[root@tank-04 ~]# pvcreate /dev/sda
  Physical volume "/dev/sda" successfully created
[root@tank-04 ~]# vgcreate bob_vg /dev/sda
  Volume group "bob_vg" successfully created
[root@tank-04 ~]# lvcreate -L 50G bob_vg -n bobs_lv
  Logical volume "bobs_lv" created
[root@tank-04 ~]# gfs_mkfs -p lock_dlm -t test:data -j4 /dev/mapper/bob_vg-bobs_lv
This will destroy any data on /dev/mapper/bob_vg-bobs_lv.
Are you sure you want to proceed? [y/n] y
Device:            /dev/mapper/bob_vg-bobs_lv
Blocksize:         4096
Filesystem Size:   12974628
Journals:          4
Resource Groups:   198
Locking Protocol:  lock_dlm
Lock Table:        test:data
Syncing...
All Done
[root@tank-04 ~]# mount -o lockproto=lock_nolock /dev/mapper/bob_vg-bobs_lv /mnt
[root@tank-04 ~]# ls /mnt
[root@tank-04 ~]# umount /mnt
[root@tank-04 ~]#

Note that this symptom does appear if one of the kernel modules is not loaded at mount time. I'm closing this as WORKSFORME. If this is still a problem, please reopen the bug record with information on how to recreate it.