Bug 823151 - locks on paths/basenames containing "." in them are not cleared by clear-locks command
Summary: locks on paths/basenames containing "." in them are not cleared by clear-locks command
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: krishnan parthasarathi
QA Contact: Shwetha Panduranga
URL:
Whiteboard:
Depends On:
Blocks: 817967
 
Reported: 2012-05-19 12:48 UTC by Shwetha Panduranga
Modified: 2015-12-01 16:45 UTC (History)
CC List: 4 users

Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-07-24 17:53:22 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Shwetha Panduranga 2012-05-19 12:48:12 UTC
Description of problem:
If the path or basename contains a "." (for example, dir.1 or file.1), the clear-locks command does not clear any locks (posix, inode, entry) on those paths.

Version-Release number of selected component (if applicable):
3.3.0qa41

How reproducible:
often

Steps to Reproduce:
1. Create a distributed-replicate volume (3x3).
2. Create 1 fuse mount and 1 nfs mount.
3. On the fuse mount, run "for i in {1..1000}; do mkdir dir.$i; sleep 3; done".
4. On the nfs mount, run "for i in {1..1000}; do rm -rf dir.$i; sleep 4; done".
5. Execute "gluster volume statedump <volumename>".

Output of statedump
-------------------
2 xlator.feature.locks.lock-dump.domain.entrylk.entrylk[29](ACTIVE)=type=ENTRYLK_WRLCK on basename=dir.47, pid = 18446744072220106120, owner=88dd38a7007f0000, transport=0x1bc5fd0, , granted at Sat May 19 08:07:32 2012


6. Execute "gluster volume clear-locks vol / kind granted entry dir.47".
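
For reference, the reproduction steps above condense into the shell sketch below. The volume name "vol" is taken from the volume info under Additional info; the mount points /mnt/fuse and /mnt/nfs are assumptions.

# on the fuse mount (assumed at /mnt/fuse): keep creating dot-named directories
( cd /mnt/fuse && for i in {1..1000}; do mkdir dir.$i; sleep 3; done ) &

# on the nfs mount (assumed at /mnt/nfs): keep removing the same directories
( cd /mnt/nfs && for i in {1..1000}; do rm -rf dir.$i; sleep 4; done ) &

# dump brick state, then look in the statedump files for a granted (ACTIVE)
# entry lock on a dot-named basename
gluster volume statedump vol

# attempt to clear that lock; before the fix this reports "No locks cleared."
# for any basename containing "."
gluster volume clear-locks vol / kind granted entry dir.47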
  
Actual results:
-------------------
Volume clear-locks successful
No locks cleared.
No locks cleared.
No locks cleared.

Expected results:
------------------
Volume clear-locks successful
vol-locks: entry blocked locks=0 granted locks=1
vol-locks: entry blocked locks=0 granted locks=1
vol-locks: entry blocked locks=0 granted locks=1

Additional info:
-----------------

[05/19/12 - 08:35:02 root@AFR-Server1 ~]# gluster v info
 
Volume Name: vol
Type: Distributed-Replicate
Volume ID: c0baeeb7-bd41-43a3-b10f-cb5efcb97236
Status: Started
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.16.159.184:/export_b1/dir1
Brick2: 10.16.159.188:/export_b1/dir1
Brick3: 10.16.159.196:/export_b1/dir1
Brick4: 10.16.159.184:/export_c1/dir1
Brick5: 10.16.159.188:/export_c1/dir1
Brick6: 10.16.159.196:/export_c1/dir1
Brick7: 10.16.159.184:/export_d1/dir1
Brick8: 10.16.159.188:/export_d1/dir1
Brick9: 10.16.159.196:/export_d1/dir1
Options Reconfigured:
network.ping-timeout: 1013

Comment 1 Shwetha Panduranga 2012-06-01 11:57:59 UTC
Verified the fix on 3.3.0qa45. Bug is fixed.

Output:-
~~~~~~~~

[06/01/12 - 22:50:54 root@APP-SERVER1 ~]# gluster v statedump dstore
Volume statedump successful

[06/01/12 - 22:51:01 root@APP-SERVER1 ~]# grep -i "BLOCK" /tmp/export_sdb-dir1.10088.dump 
xlator.feature.locks.lock-dump.domain.entrylk.entrylk[8](BLOCKED)=type=ENTRYLK_WRLCK on basename=dir.1, pid = 638492820, owner=94a00e26367f0000, transport=0xf0ebf0, , blocked at Fri Jun  1 22:50:45 2012
xlator.feature.locks.lock-dump.domain.entrylk.entrylk[8](BLOCKED)=type=ENTRYLK_WRLCK on basename=dir.1, pid = 638492820, owner=94a00e26367f0000, transport=0xf0ebf0, , blocked at Fri Jun  1 22:50:45 2012

[06/01/12 - 22:51:19 root@APP-SERVER1 ~]# gluster v clear-locks dstore / kind blocked entry dir.1
Volume clear-locks successful
No locks cleared.
dstore-locks: entry blocked locks=1 granted locks=0


[06/01/12 - 22:53:03 root@APP-SERVER1 ~]# gluster v statedump dstore
Volume statedump successful

[06/01/12 - 22:53:10 root@APP-SERVER1 ~]# grep -i "ACTIVE" /tmp/export_sdb-dir1.10088.dump 
xlator.feature.locks.lock-dump.domain.entrylk.entrylk[49](ACTIVE)=type=ENTRYLK_WRLCK on basename=dir.50, pid = 18446744072639651704, owner=789f3ac0b57f0000, transport=0xf0b150, , granted at Fri Jun  1 22:53:03 2012
xlator.feature.locks.lock-dump.domain.entrylk.entrylk[50](ACTIVE)=type=ENTRYLK_WRLCK on basename=dir.51, pid = 18446744072639653788, owner=9ca73ac0b57f0000, transport=0xf0b150, , granted at Fri Jun  1 22:53:06 2012

[06/01/12 - 22:53:22 root@APP-SERVER1 ~]# gluster v clear-locks dstore / kind granted entry dir.51
Volume clear-locks successful
dstore-locks: entry blocked locks=0 granted locks=1
dstore-locks: entry blocked locks=0 granted locks=1

Comment 2 Shwetha Panduranga 2012-06-01 12:00:20 UTC
[06/01/12 - 22:55:07 root@APP-SERVER1 ~]# gluster v clear-locks dstore / kind granted entry 
Volume clear-locks successful
dstore-locks: entry blocked locks=0 granted locks=92
dstore-locks: entry blocked locks=0 granted locks=92

