Bug 810502 - ping_pong application hangs on fuse mounts
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: locks
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Assigned To: Pranith Kumar K
QA Contact: Shwetha Panduranga
Depends On:
Blocks: 817967
Reported: 2012-04-06 09:15 EDT by Shwetha Panduranga
Modified: 2015-12-01 11:45 EST
CC List: 3 users

Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Last Closed: 2013-07-24 13:42:27 EDT
Type: Bug


Attachments: None
Description Shwetha Panduranga 2012-04-06 09:15:23 EDT
Description of problem:

Even after all locks (both blocked and active) on the ping_pong file are cleared, the ping_pong application does not exit; it hangs on the fuse mount.

A replace-brick operation was also initiated before the clear-locks commands were run.

Version-Release number of selected component (if applicable):
3.3.0qa33

How reproducible:
often

Steps to Reproduce:
1. Create a distribute-replicate volume (3x3).
2. Set the auth.allow option to <ip_address_of_client>.

[04/06/12 - 21:57:30 root@APP-SERVER1 ~]# gluster volume info
 
Volume Name: dstore
Type: Distributed-Replicate
Volume ID: f69cd573-751f-45dd-b741-4bb9caa7cffc
Status: Started
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 192.168.2.35:/export1/dstore1
Brick2: 192.168.2.36:/export1/dstore1
Brick3: 192.168.2.37:/export1/dstore1
Brick4: 192.168.2.35:/export2/dstore1
Brick5: 192.168.2.36:/export2/dstore1
Brick6: 192.168.2.37:/export2/dstore1
Brick7: 192.168.2.35:/export1/dstore2
Brick8: 192.168.2.36:/export1/dstore2
Brick9: 192.168.2.37:/export1/dstore2
Options Reconfigured:
auth.allow: 192.168.2.34
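
For reference, a volume with the layout shown above could have been created along these lines (the exact create/set/start invocations are an assumption reconstructed from the volume info; brick paths and addresses are taken from it):

  gluster volume create dstore replica 3 transport tcp \
      192.168.2.35:/export1/dstore1 192.168.2.36:/export1/dstore1 192.168.2.37:/export1/dstore1 \
      192.168.2.35:/export2/dstore1 192.168.2.36:/export2/dstore1 192.168.2.37:/export2/dstore1 \
      192.168.2.35:/export1/dstore2 192.168.2.36:/export1/dstore2 192.168.2.37:/export1/dstore2
  gluster volume set dstore auth.allow 192.168.2.34
  gluster volume start dstore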

3. Create a fuse mount of the volume from a client machine.
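
For example, the volume can be mounted over fuse with (the mount point /mnt/dstore is an illustrative choice; any server in the cluster can be used as the mount source):

  mount -t glusterfs 192.168.2.35:/dstore /mnt/dstore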

4. Start 4-5 instances of ping_pong from the same mount ("/usr/sbin/ping_pong ping_pong_file -rw 100 50 300").
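
One way to start the instances is a background loop from the mount point (a sketch; /mnt/dstore is the hypothetical mount point from step 3):

  cd /mnt/dstore
  for i in 1 2 3 4 5; do /usr/sbin/ping_pong ping_pong_file -rw 100 50 300 & done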

5. Run "gluster volume statedump <volume_name>" and check whether there are blocked locks on the ping_pong file.
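
For example, with the volume name from the setup above (the statedump directory and lock-dump format vary by version, so the grep below is only a sketch; the dump files are written on the brick servers, often under /var/run/gluster or the path set by server.statedump-path):

  gluster volume statedump dstore
  grep -i blocked /var/run/gluster/*.dump.*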

6. While there are blocked locks on the ping_pong file, execute:
"gluster volume replace-brick <volume_name> <old_brick> <new_brick> start"
(select the brick that holds the ping_pong file for replacement)
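
A concrete invocation, assuming the ping_pong file lives on the first replica set and using a hypothetical new brick on 192.168.2.38:

  gluster volume replace-brick dstore 192.168.2.35:/export1/dstore1 192.168.2.38:/export1/dstore1 start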

7. gluster volume clear-locks <volume_name> /ping_pong_file kind blocked inode

8. gluster volume replace-brick <volume_name> <old_brick> <new_brick> commit

9. gluster volume clear-locks <volume_name> /ping_pong_file kind blocked posix

10. gluster volume clear-locks <volume_name> /ping_pong_file kind all posix
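
With the volume name substituted, steps 7-10 would look like this (a sketch; the replace-brick commit reuses the hypothetical bricks from the example under step 6):

  gluster volume clear-locks dstore /ping_pong_file kind blocked inode
  gluster volume replace-brick dstore 192.168.2.35:/export1/dstore1 192.168.2.38:/export1/dstore1 commit
  gluster volume clear-locks dstore /ping_pong_file kind blocked posix
  gluster volume clear-locks dstore /ping_pong_file kind all posix
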
Comment 1 Anand Avati 2012-05-07 03:46:58 EDT
CHANGE: http://review.gluster.com/3221 (cluster/afr: Perform Flush with lk-owner given by parent xlator.) merged in master by Vijay Bellur (vijay@gluster.com)
Comment 2 Anand Avati 2012-05-07 03:48:55 EDT
CHANGE: http://review.gluster.com/3228 (cluster/afr: Fix inodelk-trace logs to print lk-owners) merged in master by Vijay Bellur (vijay@gluster.com)
Comment 3 Shwetha Panduranga 2012-05-14 01:50:17 EDT
This bug still exists on 3.3.0qa41
Comment 4 Anand Avati 2012-05-18 22:04:06 EDT
CHANGE: http://review.gluster.com/3365 (features/locks: insert_and_merge should not operate on blocked locks) merged in master by Anand Avati (avati@redhat.com)
Comment 5 Anand Avati 2012-05-18 22:04:38 EDT
CHANGE: http://review.gluster.com/3366 (features/locks: Don't delete blocked locks in pl_flush) merged in master by Anand Avati (avati@redhat.com)
Comment 6 Shwetha Panduranga 2012-05-24 08:02:48 EDT
Bug is fixed. Verified on 3.3.0qa43
