How reproducible: Consistently

Steps to Reproduce:
1. Create a disperse volume 2 x (4 + 2) and enable MDCache and GNFS on it
2. Mount the volume from a single server on 2 different clients
3. Create a 512-byte file from client 1 on the mount point
4. Take a lock from client 1. Lock is acquired
5. Try taking the lock from client 2. Lock is blocked (as it is already held by client 1)
6. Release the lock from client 1. Take the lock from client 2
7. Again try taking the lock from client 1

Actual results:
Lock is granted to client 1

Expected results:
Lock should not be granted to client 1, as the lock is currently held by client 2
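For reference, the locking exercised in steps 4-7 can be driven with a small fcntl(F_SETLKW) program like the sketch below, run once per client on the mount point. The file path and byte range are illustrative assumptions, not taken from the original report.

/* Minimal sketch of taking an advisory POSIX lock from a client mount.
 * Path and range are illustrative; run one instance per client
 * (steps 4-7) to observe which process is granted the lock. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
        const char *path = argc > 1 ? argv[1] : "/mnt/glusterfs/testfile";
        int fd = open(path, O_RDWR);
        if (fd < 0) {
                perror("open");
                return EXIT_FAILURE;
        }

        struct flock fl = {
                .l_type   = F_WRLCK,   /* exclusive lock */
                .l_whence = SEEK_SET,
                .l_start  = 0,
                .l_len    = 0,         /* whole file */
        };

        /* Blocks until the lock is granted, mirroring the "lock is
         * blocked" behaviour seen from client 2 in step 5. */
        if (fcntl(fd, F_SETLKW, &fl) < 0) {
                perror("fcntl(F_SETLKW)");
                return EXIT_FAILURE;
        }
        printf("lock acquired by pid %d; press Enter to release\n", getpid());
        getchar();

        fl.l_type = F_UNLCK;
        fcntl(fd, F_SETLK, &fl);       /* step 6: release */
        close(fd);
        return EXIT_SUCCESS;
}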
REVIEW: https://review.gluster.org/17826 (features/locks: Return correct flock structure in grant_blocked_locks) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: https://review.gluster.org/17826 (features/locks: Maintain separation of lock->client_pid, flock->l_pid) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: https://review.gluster.org/17826 (features/locks: Maintain separation of lock->client_pid, flock->l_pid) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: https://review.gluster.org/17826 (features/locks: Maintain separation of lock->client_pid, flock->l_pid) posted (#4) for review on master by Pranith Kumar Karampuri (pkarampu)
COMMIT: https://review.gluster.org/17826 committed in master by Pranith Kumar Karampuri (pkarampu) ------

commit 572b4bf889d903dcaed49a57a75270a763dc259d
Author: Pranith Kumar K <pkarampu>
Date: Fri Sep 22 12:50:43 2017 +0530

features/locks: Maintain separation of lock->client_pid, flock->l_pid

Problem:
grant_blocked_locks() constructs the flock from the lock. The locks xlator uses frame->root->pid and flock->l_pid interchangeably. With gNFS, frame->root->pid (which translates to lock->client_pid) is not the same as flock->l_pid, so lk's cbk returns a flock whose l_pid comes from lock->client_pid instead of the input flock->l_pid. This triggers EC's error code path and the lk call fails, because the response's flock->l_pid differs from the request's flock->l_pid.

Fix:
Maintain separation of lock->client_pid and flock->l_pid. Always unwind with a flock carrying the correct pid.

BUG: 1472961
Change-Id: Ifab35c458662cf0082b902f37782f8c5321d823d
Signed-off-by: Pranith Kumar K <pkarampu>
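To illustrate the idea behind the fix (this is a simplified sketch, not the actual GlusterFS code; the struct and field names are hypothetical): the internal lock record must keep the l_pid supplied by the client separately from the transport-level client_pid, and the flock unwound when a blocked lock is granted must be populated from the former.

/* Hypothetical, simplified sketch of the separation the fix maintains;
 * struct and field names are illustrative, not GlusterFS internals. */
#include <sys/types.h>
#include <fcntl.h>

/* Internal lock record kept by the locks translator (simplified). */
struct posix_lock_sketch {
        struct flock user_flock;  /* flock as sent by the client, incl. l_pid */
        pid_t        client_pid;  /* frame->root->pid; differs under gNFS */
};

/* Build the flock to unwind with when a blocked lock is granted.
 * Before the fix, l_pid effectively came from client_pid; after the
 * fix it must come from the client-supplied flock. */
static void
grant_blocked_lock_reply(const struct posix_lock_sketch *lock,
                         struct flock *reply)
{
        *reply = lock->user_flock;              /* preserves type, range ... */
        reply->l_pid = lock->user_flock.l_pid;  /* NOT lock->client_pid */
}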
This bug is getting closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.13.0, please open a new bug report.

glusterfs-3.13.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-December/000087.html
[2] https://www.gluster.org/pipermail/gluster-users/