Bug 1008172 - Running a second gluster command from the same node clears locks held by the first gluster command, even before the first command has completed execution
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Assigned To: Avra Sengupta
Depends On:
Blocks: 1008173
Reported: 2013-09-15 09:06 EDT by Avra Sengupta
Modified: 2014-04-17 07:47 EDT (History)
CC List: 1 user

See Also:
Fixed In Version: glusterfs-3.5.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1008173
Environment:
Last Closed: 2014-04-17 07:47:56 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Avra Sengupta 2013-09-15 09:06:45 EDT
Description of problem:

While a gluster command holding the cluster lock is still executing,
any other gluster command that tries to run will fail to acquire the
lock. As a result, command#2 follows the cleanup code path, which
includes unlocking the held locks. Because both commands are run from
the same node, command#2 ends up releasing the locks held by command#1
even before command#1 reaches completion.
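
Below is a minimal, self-contained sketch of the faulty flow described above. The names (cluster_locked, try_cluster_lock, command_cleanup) are hypothetical and do not correspond to the actual glusterd symbols; it only illustrates how a command that fails to acquire the lock still walks the common cleanup path, which unconditionally releases the lock held by the first command:

#include <stdbool.h>
#include <stdio.h>

/* Node-wide cluster lock shared by all commands run from this node.
 * All names here are hypothetical, for illustration only. */
static bool cluster_locked = false;

static int try_cluster_lock(void)
{
    if (cluster_locked)
        return -1;              /* command#2 fails here ...                 */
    cluster_locked = true;
    return 0;
}

static void cluster_unlock(void)
{
    cluster_locked = false;
}

/* Cleanup path taken by every command, whether it succeeded or failed. */
static void command_cleanup(void)
{
    cluster_unlock();           /* ... but still releases the lock that
                                 * command#1 is holding (the bug).          */
}

int main(void)
{
    (void) try_cluster_lock();  /* command#1 acquires the lock              */

    if (try_cluster_lock() != 0)
        command_cleanup();      /* command#2 fails; cleanup unlocks anyway  */

    /* command#1 is still running, but the cluster lock is already free. */
    printf("cluster_locked = %d\n", cluster_locked);   /* prints 0 */
    return 0;
}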


Version-Release number of selected component (if applicable):


How reproducible:
Every time


Steps to Reproduce:
1. Make a gluster command take a long time to execute (e.g. put in a hack that makes it call a script which sleeps for 2-3 minutes).
2. Meanwhile, run another gluster command from the same node. This command will fail to acquire the lock but will end up releasing the locks already held.
3. Command#1 is still executing, yet the locks have been released. You can now run another command in parallel with the first gluster command (still in execution).

Actual results:
A second gluster transaction from the same node releases the locks held by another transaction.


Expected results:
The locks should be unlocked only by the transaction that acquired them.
Additional info:
Comment 1 Anand Avati 2013-09-16 07:19:14 EDT
REVIEW: http://review.gluster.org/5937 (glusterd: Adding transaction checks for cluster unlock.) posted (#1) for review on master by Avra Sengupta (asengupt@redhat.com)
Comment 2 Anand Avati 2013-09-17 03:12:47 EDT
REVIEW: http://review.gluster.org/5937 (glusterd: Adding transaction checks for cluster unlock.) posted (#2) for review on master by Avra Sengupta (asengupt@redhat.com)
Comment 3 Anand Avati 2013-09-20 14:49:35 EDT
COMMIT: http://review.gluster.org/5937 committed in master by Anand Avati (avati@redhat.com) 
------
commit 78b0b59285b03af65c10a1fd976836bc5f53c167
Author: Avra Sengupta <asengupt@redhat.com>
Date:   Sun Sep 15 17:55:31 2013 +0530

    glusterd: Adding transaction checks for cluster unlock.
    
    While a gluster command holding lock is in execution,
    any other gluster command which tries to run will fail to
    acquire the lock. As a result command#2 will follow the
    cleanup code flow, which also includes unlocking the held
    locks. As both the commands are run from the same node,
    command#2 will end up releasing the locks held by command#1
    even before command#1 reaches completion.
    
    Now we call the unlock routine in the code path only if the cluster
    has been locked during the same transaction.
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    
    Change-Id: I7b7aa4d4c7e565e982b75b8ed1e550fca528c834
    BUG: 1008172
    Signed-off-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-on: http://review.gluster.org/5937
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
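
A minimal sketch of the transaction check described in the commit above, again using hypothetical names (txn_ctx_t, locked_in_this_txn, cluster_unlock) rather than the actual glusterd structures: the cleanup path calls the unlock routine only if the cluster was locked during the same transaction.

#include <stdbool.h>

static bool cluster_locked = false;

static void cluster_unlock(void)
{
    cluster_locked = false;
}

/* Hypothetical per-transaction context: records whether THIS transaction
 * acquired the cluster lock. */
typedef struct {
    bool locked_in_this_txn;
} txn_ctx_t;

static void txn_cleanup(txn_ctx_t *txn)
{
    /* Transaction check: a command that failed to acquire the lock never
     * set this flag, so it no longer releases another transaction's lock. */
    if (txn->locked_in_this_txn) {
        cluster_unlock();
        txn->locked_in_this_txn = false;
    }
}

int main(void)
{
    txn_ctx_t failed_txn = { .locked_in_this_txn = false };

    cluster_locked = true;      /* lock held by some other transaction   */
    txn_cleanup(&failed_txn);   /* no-op: this transaction never locked  */

    return cluster_locked ? 0 : 1;   /* lock is still held, as expected  */
}
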
Comment 4 Niels de Vos 2014-04-17 07:47:56 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
