Bug 1496326 - [GNFS+EC] lock is being granted to 2 different clients for the same data range at a time after performing lock acquire/release from the clients
Summary: [GNFS+EC] lock is being granted to 2 different clients for the same data range at a time after performing lock acquire/release from the clients
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: locks
Version: 3.12
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1472961
Blocks: glusterfs-3.12.2
 
Reported: 2017-09-27 05:21 UTC by Pranith Kumar K
Modified: 2017-10-13 12:46 UTC
CC: 14 users

Fixed In Version: glusterfs-3.12.2
Clone Of: 1472961
Environment:
Last Closed: 2017-10-13 12:46:54 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Pranith Kumar K 2017-09-27 05:22:24 UTC
Description of problem:
When the lock is taken by client 1 and the other client, client 2, tries to take the same lock, client 2's request is blocked, as the lock is already granted to client 1.

Now release the lock from client 1: the lock is granted to client 2.
Now try taking the lock from client 1 again: the lock is granted, which should not happen, as the file is still locked by client 2.

How reproducible:
Consistently

Steps to Reproduce:
1. Create a 2 x (4 + 2) disperse volume and enable MDCache and GNFS on it.
2. Mount the volume from a single server on 2 different clients.
3. Create a 512-byte file from one client on the mount point.
4. Take the lock from client 1. The lock is acquired.
5. Try taking the lock from client 2. The lock request is blocked (as the lock is already held by client 1).
6. Release the lock from client 1. Take the lock from client 2.
7. Again try taking the lock from client 1.

Actual results:
The lock is granted to client 1.

Expected results:
The lock should not be granted to client 1, as the lock is currently held by client 2.
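
For reference, below is a minimal sketch of the lock sequence each client can run to reproduce this (an assumption on our part; the original report does not include the test program). Run it from both clients against the same file on the GNFS mount; fcntl(F_SETLKW) blocks until the lock is granted, so the call in step 7 should block rather than return:

/* Hypothetical repro sketch (the original report does not include the
 * actual test program). Takes a whole-file write lock with
 * fcntl(F_SETLKW), waits for Enter, then releases it. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file-on-gnfs-mount>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct flock fl = {
        .l_type   = F_WRLCK,   /* exclusive (write) lock */
        .l_whence = SEEK_SET,
        .l_start  = 0,
        .l_len    = 0,         /* 0 = lock the whole file */
    };

    /* F_SETLKW blocks until the lock can be granted, matching the
     * "lock is blocked" behaviour in steps 5 and 7. */
    if (fcntl(fd, F_SETLKW, &fl) < 0) {
        perror("fcntl(F_SETLKW)");
        return 1;
    }
    printf("lock acquired; press Enter to release\n");
    getchar();

    fl.l_type = F_UNLCK;
    fcntl(fd, F_SETLK, &fl);   /* release the lock */
    close(fd);
    return 0;
}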

Comment 2 Worker Ant 2017-09-27 05:23:16 UTC
REVIEW: https://review.gluster.org/18403 (features/locks: Maintain separation of lock->client_pid, flock->l_pid) posted (#1) for review on release-3.12 by Pranith Kumar Karampuri (pkarampu)

Comment 3 Worker Ant 2017-10-06 06:30:39 UTC
COMMIT: https://review.gluster.org/18403 committed in release-3.12 by Jiffin Tony Thottan (jthottan)
------
commit 95a32b68d03ba34f3e0d6614cab3894351e2e15b
Author: Pranith Kumar K <pkarampu>
Date:   Fri Sep 22 12:50:43 2017 +0530

    features/locks: Maintain separation of lock->client_pid, flock->l_pid
    
    Problem:
    grant_blocked_locks() constructs the flock from the lock. The locks xlator
    uses frame->root->pid interchangeably with flock->l_pid. With gNFS,
    frame->root->pid (which translates to lock->client_pid) is not the same as
    flock->l_pid, so lk's cbk returns a flock whose l_pid comes from
    lock->client_pid instead of the input flock->l_pid. This triggers EC's
    error code path and the lk call fails, because the response's flock->l_pid
    differs from the request's flock->l_pid.
    
    Fix:
    Maintain the separation of lock->client_pid and flock->l_pid. Always
    unwind with a flock carrying the correct pid.
    
     >BUG: 1472961
     >Change-Id: Ifab35c458662cf0082b902f37782f8c5321d823d
     >Signed-off-by: Pranith Kumar K <pkarampu>
     >(cherry picked from commit 572b4bf889d903dcaed49a57a75270a763dc259d)
    
    BUG: 1496326
    Change-Id: Ifab35c458662cf0082b902f37782f8c5321d823d
    Signed-off-by: Pranith Kumar K <pkarampu>
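
To make the problem and fix above concrete, here is a simplified, hypothetical illustration (the struct and helper names below are ours, not the actual locks-xlator code): keep the transport-level pid and the caller-supplied flock->l_pid in separate fields, and always build the reply from the latter.

/* Illustrative sketch only -- not the real locks xlator code; the
 * struct and function names are hypothetical. */
#include <fcntl.h>    /* struct flock */
#include <stdint.h>

struct internal_lock {
    int32_t      client_pid;  /* from frame->root->pid: identifies the
                               * client connection; with gNFS this is NOT
                               * the application's pid */
    struct flock user_flock;  /* the flock the application sent,
                               * including its original l_pid */
};

/* Buggy pattern: the reply's l_pid is taken from client_pid, so a gNFS
 * client gets back a different pid than it sent, which trips EC's
 * response-consistency check and fails the lk call. */
static void build_reply_flock_buggy(const struct internal_lock *lock,
                                    struct flock *reply)
{
    *reply = lock->user_flock;
    reply->l_pid = lock->client_pid;  /* wrong: leaks the transport pid */
}

/* Fixed pattern: client_pid is used only for internal lock matching;
 * the reply always carries the pid from the caller's flock. */
static void build_reply_flock_fixed(const struct internal_lock *lock,
                                    struct flock *reply)
{
    *reply = lock->user_flock;        /* l_pid stays as the caller sent it */
    (void)lock->client_pid;           /* internal bookkeeping only */
}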

Comment 4 Jiffin 2017-10-13 12:46:54 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.12.2, please open a new bug report.

glusterfs-3.12.2 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-October/032684.html
[2] https://www.gluster.org/pipermail/gluster-users/

