Description of problem:

Volume type: 6x2. Present volume info:

Volume Name: lockvol
Type: Distributed-Replicate
Volume ID: 5f0b92a8-99ec-4de0-8cae-198fc54c2b06
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.213:/rhs/bricks/d1r1-2
Brick2: 10.70.37.145:/rhs/bricks/d1r2-2
Brick3: 10.70.37.68:/rhs/bricks/d2r1-2
Brick4: 10.70.37.76:/rhs/bricks/d2r2-2
Brick5: 10.70.37.213:/rhs/bricks/d3r1-2
Brick6: 10.70.37.145:/rhs/bricks/d3r2-2
Brick7: 10.70.37.68:/rhs/bricks/d4r1-2
Brick8: 10.70.37.76:/rhs/bricks/d4r2-2
Brick9: 10.70.37.213:/rhs/bricks/d5r1-2
Brick10: 10.70.37.145:/rhs/bricks/d5r2-2
Brick11: 10.70.37.68:/rhs/bricks/d6r1-2
Brick12: 10.70.37.76:/rhs/bricks/d6r2-2
Options Reconfigured:
features.quota: off
storage.batch-fsync-delay-usec: 0

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.30rhs-2.el6rhs.x86_64 client, RHEL 6.4

How reproducible:
Seen on this build.

Steps to Reproduce:
1. Create a 6x2 volume.
2. Mount the volume on a client using NFS.
3. Run "ping_pong a 5" from two instances on the same mount on the same client, and one more instance on a different client.
4. Enable quota.
5. Ctrl+C the third instance of ping_pong.
6. Ctrl+C the first two instances of ping_pong.
7. Start the ping_pong test again, this time with at least one instance.
8. Disable quota and retry step 7.

Actual results:
Step 3 works fine.
Step 4 works fine; ping_pong on all three instances works fine.
Step 5: ping_pong does not change values on the first two instances.
Step 7: ping_pong does not respond with the intended output, such as "locks/sec".
Step 8: the result is the same as step 7.

Expected results:
ping_pong should have worked properly under all the mentioned conditions.

Additional info:
nfs.log does not point out any errors throughout the sequence of operations.
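For reference, a minimal command sketch of the reproduction steps above, assuming the gluster NFS server is used (NFSv3). The mount point /mnt/lockvol, the lock file name "a", and the choice of 10.70.37.213 as the mount server are illustrative, not taken from the original report:

  # client 1: mount the volume over NFS and start two ping_pong instances,
  # each in its own terminal (steps 2-3)
  mount -t nfs -o vers=3 10.70.37.213:/lockvol /mnt/lockvol
  cd /mnt/lockvol && ping_pong a 5

  # client 2: mount the same volume and start a third instance (step 3)
  mount -t nfs -o vers=3 10.70.37.213:/lockvol /mnt/lockvol
  cd /mnt/lockvol && ping_pong a 5

  # any storage node: toggle quota around the test (steps 4 and 8)
  gluster volume quota lockvol enable
  gluster volume quota lockvol disable

While the locks are being contended correctly, each ping_pong instance should keep printing an updating "locks/sec" figure; the reported symptom is that this output stalls after the quota toggle and instance restarts.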
Saurabh, I am not able to reproduce the bug in my setup; ping_pong works as expected. A couple of observations:

* Enabling/disabling quota had no effect on the lock rate.
* When only the third instance of ping_pong is killed, the lock rate doesn't shoot up significantly. However, when 2 out of 3 instances are killed, the remaining instance achieves a locking rate equal to what a single instance of ping_pong achieves.
* I am using the same version of glusterfs as you.

Can you please confirm this is the behaviour in your setup too?

regards,
Raghavendra.
Tried to reproduce the issue in my setup (rhs-2.1). ping_pong worked as expected; it passed both the lock coherency and I/O coherency tests.
Closing the bug as not reproducible in the current release, glusterfs-3.4.0.35rhs.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-1262.html