+++ This bug was initially created as a clone of Bug #1414663 +++
Description of problem:
The Connectathon (cthon04) lock test suite is failing against NFS-Ganesha over NFSv3.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Set up a 4-node NFS-Ganesha cluster on RHEL 6.
2. Create a 4x2 Distributed-Replicate volume and enable NFS-Ganesha on it (a sketch of this step follows the list).
3. Mount the exported volume over NFSv3 on a RHEL 7 client.
4. Run the cthon lock test suite (exact invocation shown after the sketch):
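A minimal sketch of step 2; the node names and brick paths are illustrative assumptions, not the actual lab setup:

# On one node of the trusted pool, create and start a 4x2
# distributed-replicate volume (8 bricks, replica 2):
gluster volume create replicaVol replica 2 \
    node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 node4:/bricks/b1 \
    node1:/bricks/b2 node2:/bricks/b2 node3:/bricks/b2 node4:/bricks/b2
gluster volume start replicaVol
# Export the volume through the already-configured NFS-Ganesha cluster:
gluster volume set replicaVol ganesha.enable on

The cthon 'server' wrapper below performs the NFSv3 mount itself (via -o vers=3 and -m):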
# cd /root/cthon04 && ./server -l -o vers=3 -p /replicaVol -m /mnt/rhel6/mountpoint/ -N 1 dhcp37-106.lab.eng.blr.redhat.com
sh ./runtests -l -t /mnt/rhel6/mountpoint//dhcp37-192.test
Starting LOCKING tests: test directory /mnt/rhel6/mountpoint//dhcp37-192.test (arg: -t)
Testing native post-LFS locking
Creating parent/child synchronization pipes.
Test #1 - Test regions of an unlocked file.
Parent: 1.1 - F_TEST [ 0, 1] PASSED.
Parent: 1.2 - F_TEST [ 0, ENDING] PASSED.
Parent: 1.3 - F_TEST [ 0,7fffffffffffffff] PASSED.
Parent: 1.4 - F_TEST [ 1, 1] PASSED.
Parent: 1.5 - F_TEST [ 1, ENDING] PASSED.
Parent: 1.6 - F_TEST [ 1,7fffffffffffffff] PASSED.
Parent: 1.7 - F_TEST [7fffffffffffffff, 1] PASSED.
Parent: 1.8 - F_TEST [7fffffffffffffff, ENDING] PASSED.
Parent: 1.9 - F_TEST [7fffffffffffffff,7fffffffffffffff] PASSED.
Test #2 - Try to lock the whole file.
Parent: 2.0 - F_TLOCK [ 0, ENDING] FAILED!
Parent: **** Expected success, returned errno=37...
Parent: **** Probably implementation error.
** PARENT pass 1 results: 9/9 pass, 0/0 warn, 1/1 fail (pass/total).
** CHILD pass 1 results: 0/0 pass, 0/0 warn, 0/0 fail (pass/total).
lock tests failed
Tests failed, leaving /mnt/rhel6/mountpoint/ mounted
Actual results: the cthon lock test suite fails over NFSv3 (errno 37 is ENOLCK, "No locks available") but passes over NFSv4.
Expected results: the cthon lock test suite should pass over NFSv3 as well.
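NFSv3 delegates locking to the sideband NLM (lock manager) and NSM (status monitor, i.e. rpc.statd) services, while NFSv4 carries locking in the protocol itself, which is consistent with the v4 run passing. A quick triage sketch is to confirm both sideband services are registered and answering on the Ganesha server (hostname taken from the reproducer above):

# List NLM/NSM registrations with rpcbind on the server:
rpcinfo -p dhcp37-106.lab.eng.blr.redhat.com | egrep 'nlockmgr|status'
# Ping both services over UDP:
rpcinfo -u dhcp37-106.lab.eng.blr.redhat.com nlockmgr
rpcinfo -u dhcp37-106.lab.eng.blr.redhat.com status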
--- Additional comment from Red Hat Bugzilla Rules Engine on 2017-01-19 02:52:33 EST ---
This bug is automatically being proposed for the current release of Red Hat Gluster Storage 3 under active development, by setting the release flag 'rhgs-3.2.0' to '?'.
If this bug should be proposed for a different release, please manually change the proposed release flag.
--- Additional comment from Soumya Koduri on 2017-01-19 02:57:35 EST ---
Jan 18 19:35:17 dhcp47-74 rpc.statd: Failed to insert: creating /var/lib/nfs/statd/sm/dhcp37-192.lab.eng.blr.redhat.com: Permission denied
rpc.statd: STAT_FAIL to dhcp47-74.lab.eng.blr.redhat.com for SM_MON of dhcp37-192.lab.eng.blr.redhat.com
This appears to be a Day-1 issue. The statd process is started as the 'rpcuser' user, so the directories/files used by statd in the shared_storage nfs-ganesha folder need to be chowned to 'rpcuser'. I will send a fix upstream for review.
The workaround is to either manually chown those files before setting up nfs-ganesha, or to restart the statd process; a sketch of both follows.
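A minimal sketch of the workaround, to be run on each cluster node; the shared-storage mount point and per-host directory layout below are assumptions, so adjust them to the actual nfs-ganesha folder:

# Hand statd's on-disk state to the user it runs as:
for nfsdir in /run/gluster/shared_storage/nfs-ganesha/*/nfs; do
    chown rpcuser:rpcuser "${nfsdir}/state"
    chown -R rpcuser:rpcuser "${nfsdir}/statd"
done
# Alternatively, restart statd so it re-creates its files:
service nfslock restart        # RHEL 6
systemctl restart rpc-statd    # RHEL 7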
REVIEW: http://review.gluster.org/16433 (common-ha: All statd related files need to be owned by rpcuser) posted (#1) for review on release-3.9 by soumya k (firstname.lastname@example.org)
REVIEW: http://review.gluster.org/16433 (common-ha: All statd related files need to be owned by rpcuser) posted (#2) for review on release-3.9 by soumya k (email@example.com)
COMMIT: http://review.gluster.org/16433 committed in release-3.9 by Kaleb KEITHLEY (firstname.lastname@example.org)
Author: Soumya Koduri <email@example.com>
Date: Thu Jan 19 15:01:12 2017 +0530
common-ha: All statd related files need to be owned by rpcuser
Statd service is started as rpcuser by default. Hence the
files/directories needed by it under '/var/lib/nfs' should be
owned by the same user.
Note: This change is not in mainline as the cluster-bits
are being moved to storehaug project -
Signed-off-by: Soumya Koduri <firstname.lastname@example.org>
Reviewed-by: jiffin tony Thottan <email@example.com>
Smoke: Gluster Build System <firstname.lastname@example.org>
Tested-by: Kaleb KEITHLEY <email@example.com>
NetBSD-regression: NetBSD Build System <firstname.lastname@example.org>
CentOS-regression: Gluster Build System <email@example.com>
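The patch itself is at the review.gluster.org link above; the following is only an illustrative sketch of the kind of change it makes in the common-ha setup script (the variable and path names here are assumptions, not the literal diff):

# After populating the per-host nfs directory on the shared storage,
# make statd's files owned by the user the statd service runs as:
chown rpcuser:rpcuser ${HA_VOL_MNT}/nfs-ganesha/${dirname}/nfs/state
chown -R rpcuser:rpcuser ${HA_VOL_MNT}/nfs-ganesha/${dirname}/nfs/statd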
This bug is being closed because GlusterFS-3.9 has reached its end-of-life.
Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please open a new bug against the newer release.