Bug 1472961 - [GNFS+EC] lock is being granted to 2 different clients for the same data range at a time after performing lock acquire/release from the clients
Summary: [GNFS+EC] lock is being granted to 2 different clients for the same data range at a time after performing lock acquire/release from the clients
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: locks
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1496326
 
Reported: 2017-07-19 16:56 UTC by Pranith Kumar K
Modified: 2017-12-08 17:34 UTC
CC List: 14 users

Fixed In Version: glusterfs-3.13.0
Clone Of: 1411338
Clones: 1496326
Environment:
Last Closed: 2017-12-08 17:34:10 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Pranith Kumar K 2017-07-19 16:59:35 UTC
How reproducible:
Consistently

Steps to Reproduce:
1. Create a 2 x (4 + 2) disperse volume (disperseVol) and enable MDCache and GNFS on it.
2. Mount the volume from a single server on 2 different clients.
3. Create a 512-byte file from one client on the mount point.
4. Take a lock from client 1. The lock is acquired.
5. Try taking the lock from client 2. The lock is blocked (as it is already held by client 1).
6. Release the lock from client 1, then take the lock from client 2.
7. Try taking the lock from client 1 again.

Actual results:
The lock is granted to client 1.

Expected results:
The lock should not be granted to client 1, as the lock is currently held by client 2 (see the fcntl sketch below).
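
For reference, a minimal sketch of the byte-range locking test in steps 4-7, using fcntl(2). The mount path, file name, and 512-byte range are assumptions for illustration, not taken from the report; each client runs the same program against its own GNFS mount.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* hypothetical GNFS mount path and file name */
    int fd = open("/mnt/gnfs/testfile", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct flock fl = {
        .l_type   = F_WRLCK,   /* exclusive write lock */
        .l_whence = SEEK_SET,
        .l_start  = 0,
        .l_len    = 512,       /* lock the whole 512-byte file */
    };

    /* Steps 4/5/7: blocking acquire; on client 2 this blocks while
       client 1 holds the lock. */
    if (fcntl(fd, F_SETLKW, &fl) < 0) {
        perror("fcntl(F_SETLKW)");
        return 1;
    }
    printf("lock acquired, pid=%d\n", (int)getpid());

    getchar();                 /* hold the lock until Enter is pressed */

    fl.l_type = F_UNLCK;       /* step 6: release the lock */
    fcntl(fd, F_SETLK, &fl);
    close(fd);
    return 0;
}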

Comment 2 Worker Ant 2017-07-19 17:04:09 UTC
REVIEW: https://review.gluster.org/17826 (features/locks: Return correct flock structure in grant_blocked_locks) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 3 Worker Ant 2017-09-22 08:36:12 UTC
REVIEW: https://review.gluster.org/17826 (features/locks: Maintain separation of lock->client_pid, flock->l_pid) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 4 Worker Ant 2017-09-22 08:42:25 UTC
REVIEW: https://review.gluster.org/17826 (features/locks: Maintain separation of lock->client_pid, flock->l_pid) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 5 Worker Ant 2017-09-26 10:24:42 UTC
REVIEW: https://review.gluster.org/17826 (features/locks: Maintain separation of lock->client_pid, flock->l_pid) posted (#4) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 6 Worker Ant 2017-09-27 05:20:38 UTC
COMMIT: https://review.gluster.org/17826 committed in master by Pranith Kumar Karampuri (pkarampu) 
------
commit 572b4bf889d903dcaed49a57a75270a763dc259d
Author: Pranith Kumar K <pkarampu>
Date:   Fri Sep 22 12:50:43 2017 +0530

    features/locks: Maintain separation of lock->client_pid, flock->l_pid
    
    Problem:
    grant_blocked_locks() constructs the flock from the lock. The locks
    xlator uses frame->root->pid and flock->l_pid interchangeably. With
    gNFS, frame->root->pid (which translates to lock->client_pid) is not
    the same as flock->l_pid, so lk's cbk returns a flock whose l_pid is
    taken from lock->client_pid instead of the input flock->l_pid. This
    triggers EC's error code path and fails the lk call, because the
    response's flock->l_pid differs from the request's flock->l_pid.
    
    Fix:
    Maintain the separation between lock->client_pid and flock->l_pid.
    Always unwind with a flock carrying the correct pid.
    
    BUG: 1472961
    Change-Id: Ifab35c458662cf0082b902f37782f8c5321d823d
    Signed-off-by: Pranith Kumar K <pkarampu>
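
To illustrate the separation the commit describes, here is a simplified, hypothetical sketch; the struct and helper below are stand-ins modeled on the commit message, not the actual locks xlator code.

#include <fcntl.h>   /* struct flock */
#include <stdint.h>  /* int32_t */

/* Simplified stand-in for the locks xlator's internal lock record. */
struct posix_lock {
    int32_t      client_pid; /* frame->root->pid of the lock owner */
    struct flock user_flock; /* the flock as the application sent it */
};

/* Build the flock to unwind with when a blocked lock is granted.
 * Buggy behaviour: copying lock->client_pid into out->l_pid; with gNFS
 * the two differ, so EC sees a response l_pid != request l_pid and
 * fails the lk call. Fix: preserve the flock's own l_pid. */
static void
grant_blocked_lock_flock(const struct posix_lock *lock, struct flock *out)
{
    *out = lock->user_flock;       /* keeps the caller's l_pid intact */
    /* out->l_pid = lock->client_pid;   <-- the buggy assignment */
}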

Comment 7 Shyamsundar 2017-12-08 17:34:10 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.13.0, please open a new bug report.

glusterfs-3.13.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-December/000087.html
[2] https://www.gluster.org/pipermail/gluster-users/

