Bug 802515 - Enable eager-locking in AFR
Summary: Enable eager-locking in AFR
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: pre-release
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact: Shwetha Panduranga
URL:
Whiteboard:
Depends On:
Blocks: 817967 895528
Reported: 2012-03-12 17:35 UTC by Pranith Kumar K
Modified: 2013-07-24 17:25 UTC
CC: 2 users

Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-07-24 17:25:46 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Pranith Kumar K 2012-03-12 17:35:50 UTC
Description of problem:
Enable eager-locking in AFR.
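
Eager-locking is intended to let AFR keep the lock acquired for one write transaction and reuse it for subsequent writes on the same file descriptor, instead of paying a lock/unlock network round trip per write. For reference, it is exposed as a regular volume option; a minimal sketch of enabling it and checking the result, using the volume name "vol" from the test output below:

# gluster volume set vol cluster.eager-lock on
# gluster volume info vol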


Comment 1 Anand Avati 2012-03-18 06:31:44 UTC
CHANGE: http://review.gluster.com/2925 (cluster/afr: Enable eager-lock) merged in master by Anand Avati (avati)

Comment 2 Pranith Kumar K 2012-06-01 12:14:51 UTC
Check these 2 commit descriptions:
http://review.gluster.com/240
http://review.gluster.com/2925

Comment 3 Shwetha Panduranga 2012-06-06 11:41:04 UTC
Lock tests run simultaneously on two different mounts passed successfully.

Testing http://review.gluster.com/#change,240

Output:-
---------
[06/06/12 - 07:12:38 root@AFR-Server1 ~]# gluster v info
 
Volume Name: vol
Type: Distributed-Replicate
Volume ID: b2f7f458-598e-456f-af7c-aa5af0036393
Status: Started
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.16.159.184:/export_b1/dir1
Brick2: 10.16.159.188:/export_b1/dir1
Brick3: 10.16.159.196:/export_b1/dir1
Brick4: 10.16.159.184:/export_c1/dir1
Brick5: 10.16.159.188:/export_c1/dir1
Brick6: 10.16.159.196:/export_c1/dir1
Brick7: 10.16.159.184:/export_d1/dir1
Brick8: 10.16.159.188:/export_d1/dir1
Brick9: 10.16.159.196:/export_d1/dir1
Options Reconfigured:
diagnostics.client-log-level: DEBUG
performance.write-behind: on
cluster.eager-lock: on


[06/06/12 - 07:32:40 root@AFR-Client1 gfsc1]# /root/locks/locktests -n 1 -f ./file11
Init
process initalization
..
--------------------------------------

TEST : TRY TO WRITE ON A READ  LOCK:=
TEST : TRY TO WRITE ON A WRITE LOCK:=
TEST : TRY TO READ  ON A READ  LOCK:=
TEST : TRY TO READ  ON A WRITE LOCK:=
TEST : TRY TO SET A READ  LOCK ON A READ  LOCK:=
TEST : TRY TO SET A WRITE LOCK ON A WRITE LOCK:x
TEST : TRY TO SET A WRITE LOCK ON A READ  LOCK:x
TEST : TRY TO SET A READ  LOCK ON A WRITE LOCK:x
TEST : TRY TO READ LOCK THE WHOLE FILE BYTE BY BYTE:=
TEST : TRY TO WRITE LOCK THE WHOLE FILE BYTE BY BYTE:=

process number : 1
process number running test successfully :
1 process of 1 successfully ran test : WRITE ON A READ  LOCK
1 process of 1 successfully ran test : WRITE ON A WRITE LOCK
1 process of 1 successfully ran test : READ  ON A READ  LOCK
1 process of 1 successfully ran test : READ  ON A WRITE LOCK
1 process of 1 successfully ran test : SET A READ  LOCK ON A READ  LOCK
1 process of 1 successfully ran test : SET A WRITE LOCK ON A WRITE LOCK
1 process of 1 successfully ran test : SET A WRITE LOCK ON A READ  LOCK
1 process of 1 successfully ran test : SET A READ  LOCK ON A WRITE LOCK
1 process of 1 successfully ran test : READ LOCK THE WHOLE FILE BYTE BY BYTE
1 process of 1 successfully ran test : WRITE LOCK THE WHOLE FILE BYTE BY BYTE


[06/06/12 - 07:32:49 root@AFR-Client2 gfsc1]# /root/locks/locktests -n 1 -f ./file11
Init
process initalization
..
--------------------------------------

TEST : TRY TO WRITE ON A READ  LOCK:=
TEST : TRY TO WRITE ON A WRITE LOCK:=
TEST : TRY TO READ  ON A READ  LOCK:=
TEST : TRY TO READ  ON A WRITE LOCK:=
TEST : TRY TO SET A READ  LOCK ON A READ  LOCK:=
TEST : TRY TO SET A WRITE LOCK ON A WRITE LOCK:x
TEST : TRY TO SET A WRITE LOCK ON A READ  LOCK:x
TEST : TRY TO SET A READ  LOCK ON A WRITE LOCK:x
TEST : TRY TO READ LOCK THE WHOLE FILE BYTE BY BYTE:=
TEST : TRY TO WRITE LOCK THE WHOLE FILE BYTE BY BYTE:=

process number : 1
process number running test successfully :
1 process of 1 successfully ran test : WRITE ON A READ  LOCK
1 process of 1 successfully ran test : WRITE ON A WRITE LOCK
1 process of 1 successfully ran test : READ  ON A READ  LOCK
1 process of 1 successfully ran test : READ  ON A WRITE LOCK
1 process of 1 successfully ran test : SET A READ  LOCK ON A READ  LOCK
1 process of 1 successfully ran test : SET A WRITE LOCK ON A WRITE LOCK
1 process of 1 successfully ran test : SET A WRITE LOCK ON A READ  LOCK
1 process of 1 successfully ran test : SET A READ  LOCK ON A WRITE LOCK
1 process of 1 successfully ran test : READ LOCK THE WHOLE FILE BYTE BY BYTE
1 process of 1 successfully ran test : WRITE LOCK THE WHOLE FILE BYTE BY BYTE


Mount log output on client1:-
-----------------------------
[2012-06-06 07:32:57.106065] D [afr-transaction.c:1026:afr_post_nonblocking_inodelk_cbk] 0-vol-replicate-2: Non blocking inodelks done. Proceeding to FOP
[2012-06-06 07:32:57.107935] D [afr-transaction.c:1026:afr_post_nonblocking_inodelk_cbk] 0-vol-replicate-2: Non blocking inodelks done. Proceeding to FOP
[2012-06-06 07:32:57.108916] D [afr-common.c:704:afr_get_call_child] 0-vol-replicate-2: Returning 0, call_child: 2, last_index: -1
[2012-06-06 07:32:57.109532] D [afr-lk-common.c:403:transaction_lk_op] 0-vol-replicate-2: lk op is for a transaction
[2012-06-06 07:32:57.109725] D [afr-lk-common.c:403:transaction_lk_op] 0-vol-replicate-2: lk op is for a transaction
[2012-06-06 07:32:57.109883] D [afr-transaction.c:1019:afr_post_nonblocking_inodelk_cbk] 0-vol-replicate-2: Non blocking inodelks failed. Proceeding to blocking
[2012-06-06 07:32:57.110535] D [afr-lk-common.c:403:transaction_lk_op] 0-vol-replicate-2: lk op is for a transaction
[2012-06-06 07:32:57.111019] D [write-behind.c:1392:wb_open_cbk] 0-vol-write-behind: disabling wb on 0x1154624
[2012-06-06 07:32:57.111153] D [write-behind.c:1349:wb_disable_all] 0-vol-write-behind: disabling wb on 0x11545c0 because 0x1154624 is O_SYNC
[2012-06-06 07:32:57.111269] D [write-behind.c:1349:wb_disable_all] 0-vol-write-behind: disabling wb on 0x115455c because 0x1154624 is O_SYNC
[2012-06-06 07:32:57.114398] D [client3_1-fops.c:2790:client_fdctx_destroy] 0-vol-client-7: sending release on fd
[2012-06-06 07:32:57.114524] D [client3_1-fops.c:2790:client_fdctx_destroy] 0-vol-client-6: sending release on fd
[2012-06-06 07:32:57.114638] D [client3_1-fops.c:2790:client_fdctx_destroy] 0-vol-client-8: sending release on fd
[2012-06-06 07:32:57.115111] D [afr-lk-common.c:403:transaction_lk_op] 0-vol-replicate-2: lk op is for a transaction
[2012-06-06 07:32:57.115975] D [afr-transaction.c:1019:afr_post_nonblocking_inodelk_cbk] 0-vol-replicate-2: Non blocking inodelks failed. Proceeding to blocking
[2012-06-06 07:32:57.116881] D [afr-lk-common.c:1025:afr_lock_blocking] 0-vol-replicate-2: we're done locking
[2012-06-06 07:32:57.116946] D [afr-transaction.c:999:afr_post_blocking_inodelk_cbk] 0-vol-replicate-2: Blocking inodelks done. Proceeding to FOP
[2012-06-06 07:32:57.117529] D [afr-lk-common.c:403:transaction_lk_op] 0-vol-replicate-2: lk op is for a transaction
[2012-06-06 07:32:57.118178] D [client3_1-fops.c:2790:client_fdctx_destroy] 0-vol-client-8: sending release on fd
[2012-06-06 07:32:57.118297] D [client3_1-fops.c:2790:client_fdctx_destroy] 0-vol-client-7: sending release on fd
[2012-06-06 07:32:57.118447] D [client3_1-fops.c:2790:client_fdctx_destroy] 0-vol-client-6: sending release on fd
[2012-06-06 07:32:57.119619] D [afr-lk-common.c:1025:afr_lock_blocking] 0-vol-replicate-2: we're done locking
[2012-06-06 07:32:57.119684] D [afr-transaction.c:999:afr_post_blocking_inodelk_cbk] 0-vol-replicate-2: Blocking inodelks done. Proceeding to FOP
[2012-06-06 07:32:57.120484] D [afr-lk-common.c:403:transaction_lk_op] 0-vol-replicate-2: lk op is for a transaction
[2012-06-06 07:32:57.121103] D [client3_1-fops.c:2790:client_fdctx_destroy] 0-vol-client-8: sending release on fd
[2012-06-06 07:32:57.121222] D [client3_1-fops.c:2790:client_fdctx_destroy] 0-vol-client-6: sending release on fd
[2012-06-06 07:32:57.121330] D [client3_1-fops.c:2790:client_fdctx_destroy] 0-vol-client-7: sending release on fd

Comment 4 Shwetha Panduranga 2012-06-06 11:58:22 UTC
Turning off write-behind while eager-lock is on is not a valid configuration.

Testing http://review.gluster.com/#change,2925

Output:-
----------
[06/06/12 - 07:44:15 root@AFR-Server1 ~]# gluster v info
 
Volume Name: vol
Type: Distributed-Replicate
Volume ID: b2f7f458-598e-456f-af7c-aa5af0036393
Status: Started
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.16.159.184:/export_b1/dir1
Brick2: 10.16.159.188:/export_b1/dir1
Brick3: 10.16.159.196:/export_b1/dir1
Brick4: 10.16.159.184:/export_c1/dir1
Brick5: 10.16.159.188:/export_c1/dir1
Brick6: 10.16.159.196:/export_c1/dir1
Brick7: 10.16.159.184:/export_d1/dir1
Brick8: 10.16.159.188:/export_d1/dir1
Brick9: 10.16.159.196:/export_d1/dir1
Options Reconfigured:
diagnostics.client-log-level: DEBUG
performance.write-behind: on
cluster.eager-lock: on

[06/06/12 - 07:44:17 root@AFR-Server1 ~]# gluster v set vol write-behind off
performance.write-behind off and cluster.eager-lock on is not valid configuration
Set volume unsuccessful

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Valid Configurations:-
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Case 1:- eager-lock off and write-behind on
--------------------------------------------
[06/06/12 - 07:44:14 root@AFR-Server1 ~]# gluster v set vol write-behind on
Set volume successful
[06/06/12 - 07:44:24 root@AFR-Server1 ~]# gluster v set vol cluster.eager-lock off
Set volume successful
[06/06/12 - 07:45:32 root@AFR-Server1 ~]# gluster v info

Volume Name: vol
Type: Distributed-Replicate
Volume ID: b2f7f458-598e-456f-af7c-aa5af0036393
Status: Started
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.16.159.184:/export_b1/dir1
Brick2: 10.16.159.188:/export_b1/dir1
Brick3: 10.16.159.196:/export_b1/dir1
Brick4: 10.16.159.184:/export_c1/dir1
Brick5: 10.16.159.188:/export_c1/dir1
Brick6: 10.16.159.196:/export_c1/dir1
Brick7: 10.16.159.184:/export_d1/dir1
Brick8: 10.16.159.188:/export_d1/dir1
Brick9: 10.16.159.196:/export_d1/dir1
Options Reconfigured:
diagnostics.client-log-level: DEBUG
performance.write-behind: on
cluster.eager-lock: off

Case 2:- eager-lock on and write-behind on
-------------------------------------------
[06/06/12 - 07:48:04 root@AFR-Server1 ~]# gluster v set vol write-behind on
Set volume successful
[06/06/12 - 07:48:15 root@AFR-Server1 ~]# gluster v set vol eager-lock on
Set volume successful
[06/06/12 - 07:48:41 root@AFR-Server1 ~]# gluster v info
 
Volume Name: vol
Type: Distributed-Replicate
Volume ID: b2f7f458-598e-456f-af7c-aa5af0036393
Status: Started
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.16.159.184:/export_b1/dir1
Brick2: 10.16.159.188:/export_b1/dir1
Brick3: 10.16.159.196:/export_b1/dir1
Brick4: 10.16.159.184:/export_c1/dir1
Brick5: 10.16.159.188:/export_c1/dir1
Brick6: 10.16.159.196:/export_c1/dir1
Brick7: 10.16.159.184:/export_d1/dir1
Brick8: 10.16.159.188:/export_d1/dir1
Brick9: 10.16.159.196:/export_d1/dir1
Options Reconfigured:
diagnostics.client-log-level: DEBUG
performance.write-behind: on
cluster.eager-lock: on

Case 3:- eager-lock off and write-behind off
--------------------------------------------

[06/06/12 - 07:50:28 root@AFR-Server1 ~]# gluster v set vol eager-lock off
Set volume successful
[06/06/12 - 07:50:34 root@AFR-Server1 ~]# gluster v set vol write-behind off
Set volume successful
[06/06/12 - 07:50:44 root@AFR-Server1 ~]# gluster v info
 
Volume Name: vol
Type: Distributed-Replicate
Volume ID: b2f7f458-598e-456f-af7c-aa5af0036393
Status: Started
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.16.159.184:/export_b1/dir1
Brick2: 10.16.159.188:/export_b1/dir1
Brick3: 10.16.159.196:/export_b1/dir1
Brick4: 10.16.159.184:/export_c1/dir1
Brick5: 10.16.159.188:/export_c1/dir1
Brick6: 10.16.159.196:/export_c1/dir1
Brick7: 10.16.159.184:/export_d1/dir1
Brick8: 10.16.159.188:/export_d1/dir1
Brick9: 10.16.159.196:/export_d1/dir1
Options Reconfigured:
diagnostics.client-log-level: DEBUG
performance.write-behind: off
cluster.eager-lock: off
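
Summary of the four combinations exercised above:

write-behind | eager-lock | allowed?
-------------+------------+-----------------------------
on           | on         | yes
on           | off        | yes
off          | off        | yes
off          | on         | no (volume set is rejected)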

Comment 5 Shwetha Panduranga 2012-06-08 13:48:44 UTC
Verified the feature on 3.3.0qa45. 

The test cases written for eager-lock testing were verified by Pranith, and the tests were executed on a FUSE mount with replica count 3, i.e. a 1x3 replicate volume.
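
For context, a 1x3 replicate volume of the kind used for this verification can be set up as follows (host names and brick paths are illustrative, not the ones from this run):

# gluster volume create vol replica 3 server1:/export/brick1 server2:/export/brick1 server3:/export/brick1
# gluster volume start vol
# mount -t glusterfs server1:/vol /mnt/client1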

