Bug 1008172 - Running a second gluster command from the same node clears locks held by the first gluster command, even before the first command has completed execution
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Avra Sengupta
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1008173
 
Reported: 2013-09-15 13:06 UTC by Avra Sengupta
Modified: 2014-04-17 11:47 UTC
CC: 1 user

Fixed In Version: glusterfs-3.5.0
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1008173 (view as bug list)
Environment:
Last Closed: 2014-04-17 11:47:56 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Avra Sengupta 2013-09-15 13:06:45 UTC
Description of problem:

While a gluster command holding the cluster lock is still executing,
any other gluster command that tries to run fails to acquire the lock.
As a result, command #2 follows the cleanup code path, which also
includes releasing the held locks. Because both commands are run from
the same node, command #2 ends up releasing the locks held by
command #1 before command #1 has completed.


Version-Release number of selected component (if applicable):


How reproducible:
Every time


Steps to Reproduce:
1. Make a gluster command take a long time to execute (e.g. put in a hack that makes it call a script which sleeps for 2-3 minutes).
2. While it is running, run another gluster command from the same node. This command fails to acquire the locks, but ends up releasing the locks that are already held.
3. Command #1 is still executing, yet the locks have been released, so a third command can now run in parallel with the first gluster command (see the sketch under "Additional info" below).

Actual results:
A second gluster transaction run from the same node releases the locks held by another, still-running transaction.


Expected results:
The locks should be released only by the transaction that acquired them.
Additional info:
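To make the race concrete, here is a minimal, self-contained C sketch (hypothetical names, not glusterd's actual code). The cluster lock is keyed only by the node's uuid, so a failed second transaction from the same node releases a lock it never acquired:

    /*
     * Minimal model of the bug (hypothetical names, not glusterd's
     * actual code): the cluster lock is keyed only by the node's uuid,
     * so a second, failed transaction from the same node can release a
     * lock it never acquired.
     */
    #include <stdio.h>
    #include <string.h>

    static char lock_owner[64] = "";      /* uuid of the node holding the lock */

    static int cluster_lock(const char *node_uuid)
    {
        if (lock_owner[0] != '\0')
            return -1;                    /* already held: acquisition fails */
        snprintf(lock_owner, sizeof(lock_owner), "%s", node_uuid);
        return 0;
    }

    static void cluster_unlock(const char *node_uuid)
    {
        /* Buggy: only the node uuid is checked, not which transaction
         * actually acquired the lock. */
        if (strcmp(lock_owner, node_uuid) == 0)
            lock_owner[0] = '\0';
    }

    /* Cleanup path that every failed transaction runs, unlock included. */
    static void txn_cleanup(const char *node_uuid)
    {
        cluster_unlock(node_uuid);
    }

    int main(void)
    {
        const char *node = "uuid-of-this-node";

        cluster_lock(node);               /* command #1 takes the lock       */

        if (cluster_lock(node) != 0)      /* command #2 fails to acquire ... */
            txn_cleanup(node);            /* ... but its cleanup still drops */
                                          /* the lock held by command #1     */

        /* Prints an empty owner: command #1's lock is gone. */
        printf("lock owner after command #2 cleanup: '%s'\n", lock_owner);
        return 0;
    }

Running this prints an empty lock owner, i.e. command #1's lock is gone even though command #1 is still in progress.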

Comment 1 Anand Avati 2013-09-16 11:19:14 UTC
REVIEW: http://review.gluster.org/5937 (glusterd: Adding transaction checks for cluster unlock.) posted (#1) for review on master by Avra Sengupta (asengupt)

Comment 2 Anand Avati 2013-09-17 07:12:47 UTC
REVIEW: http://review.gluster.org/5937 (glusterd: Adding transaction checks for cluster unlock.) posted (#2) for review on master by Avra Sengupta (asengupt)

Comment 3 Anand Avati 2013-09-20 18:49:35 UTC
COMMIT: http://review.gluster.org/5937 committed in master by Anand Avati (avati) 
------
commit 78b0b59285b03af65c10a1fd976836bc5f53c167
Author: Avra Sengupta <asengupt>
Date:   Sun Sep 15 17:55:31 2013 +0530

    glusterd: Adding transaction checks for cluster unlock.
    
    While a gluster command holding lock is in execution,
    any other gluster command which tries to run will fail to
    acquire the lock. As a result command#2 will follow the
    cleanup code flow, which also includes unlocking the held
    locks. As both the commands are run from the same node,
    command#2 will end up releasing the locks held by command#1
    even before command#1 reaches completion.
    
    Now we call the unlock routine in the code path only if the cluster
    has been locked during the same transaction.
    Signed-off-by: Avra Sengupta <asengupt>
    
    Change-Id: I7b7aa4d4c7e565e982b75b8ed1e550fca528c834
    BUG: 1008172
    Signed-off-by: Avra Sengupta <asengupt>
    Reviewed-on: http://review.gluster.org/5937
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Reviewed-by: Anand Avati <avati>
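
For reference, here is a simplified sketch of the transaction check the fix introduces (hypothetical names, not the actual glusterd patch): each transaction records whether it locked the cluster, and the cleanup path calls the unlock routine only when that flag is set.

    /*
     * Sketch of the transaction check described above (hypothetical
     * names, not the actual glusterd patch): the cleanup path calls
     * the unlock routine only if the cluster was locked during the
     * same transaction.
     */
    #include <stdio.h>

    static char lock_owner[64] = "";      /* uuid of the node holding the lock */

    static int cluster_lock(const char *node_uuid)
    {
        if (lock_owner[0] != '\0')
            return -1;                    /* already held: acquisition fails */
        snprintf(lock_owner, sizeof(lock_owner), "%s", node_uuid);
        return 0;
    }

    static void cluster_unlock(void)
    {
        lock_owner[0] = '\0';
    }

    typedef struct {
        int locked;                       /* 1 only if this txn took the lock */
    } txn_t;

    static void txn_cleanup(txn_t *txn)
    {
        /* The fix: skip the unlock routine unless this transaction
         * acquired the cluster lock itself. */
        if (txn->locked) {
            cluster_unlock();
            txn->locked = 0;
        }
    }

    int main(void)
    {
        const char *node = "uuid-of-this-node";
        txn_t cmd1 = { 0 }, cmd2 = { 0 };

        if (cluster_lock(node) == 0)
            cmd1.locked = 1;              /* command #1 holds the lock   */

        if (cluster_lock(node) != 0)      /* command #2 fails to acquire */
            txn_cleanup(&cmd2);           /* cleanup is now a no-op      */

        /* Prints the node uuid: command #1 still holds the lock. */
        printf("lock owner after command #2 cleanup: '%s'\n", lock_owner);
        return 0;
    }

With the guard in place, the same two-command sequence leaves the lock with command #1, because command #2's cleanup no longer releases a lock it did not take.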

Comment 4 Niels de Vos 2014-04-17 11:47:56 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

