Bug 1402710 - ls and move hung on disperse volume
Summary: ls and move hung on disperse volume
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact:
URL:
Whiteboard:
Depends On: 1400093
Blocks: 1404572 1412922
 
Reported: 2016-12-08 08:53 UTC by Pranith Kumar K
Modified: 2017-03-06 17:38 UTC
CC List: 8 users

Fixed In Version: glusterfs-3.10.0
Clone Of: 1400093
: 1404572 1412922 (view as bug list)
Environment:
Last Closed: 2017-03-06 17:38:27 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Pranith Kumar K 2016-12-08 08:55:22 UTC
This bug is easily reproducible on a plain disperse volume too, without killing any brick.

[root@apandey gluster]# gvi
 
Volume Name: vol
Type: Disperse
Volume ID: c3e903e0-e7b5-42a3-9e75-798c4e3268a0
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: apandey:/brick/gluster/vol-1
Brick2: apandey:/brick/gluster/vol-2
Brick3: apandey:/brick/gluster/vol-3
Brick4: apandey:/brick/gluster/vol-4
Brick5: apandey:/brick/gluster/vol-5
Brick6: apandey:/brick/gluster/vol-6
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on


1 - Mount this volume on 3 mount points: c1, c2, and c3.
2 - On c1: mkdir /c1/dir ; cd dir
3 - On c2, touch 4000 files at the root of the mount point, i.e. "/".
4 - After step 3, start touching the next 4000 files on c2 at the root of the mount point, i.e. "/".
5 - On c1, start touching 10000 files in /dir/.
6 - On c3, start moving the 4000 files created in step 3 from the root of the mount point to /dir/.
7 - On c3, from a different console, start ls in a loop.

All the clients hang after some time (within 3-4 minutes).


I took a statedump and found the following inode lock contention:

lock-dump.domain.domain=dht.file.migrate
inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = 12892, owner=c4582de9117f0000, client=0x7f0c800b0430, connection-id=apandey-29101-2016/12/05-07:33:03:610275-vol-client-0-0-0, granted at 2016-12-05 07:37:05
lock-dump.domain.domain=vol-disperse-0

[conn.0.bound_xl./brick/gluster/vol-1.active.2]
gfid=6a7c71ba-9814-41ad-9975-3a7ef4e97d60
nlookup=2528
fd-count=0
ref=9179
ia_type=2

[xlator.features.locks.vol-locks.inode]
path=/dir
mandatory=0
inodelk-count=2
lock-dump.domain.domain=dht.layout.heal
lock-dump.domain.domain=vol-disperse-0
inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = 12892, owner=6c20b6d5117f0000, client=0x7f0c800b0430, connection-id=apandey-29101-2016/12/05-07:33:03:610275-vol-client-0-0-0, blocked at 2016-12-05 07:37:05, granted at 2016-12-05 07:37:05
inodelk.inodelk[1](BLOCKED)=type=WRITE, whence=0, start=0, len=0, pid = 12894, owner=e870234b277f0000, client=0x7f0c800ac5f0, connection-id=apandey-29029-2016/12/05-07:32:51:253616-vol-client-0-0-0, blocked at 2016-12-05 07:37:05

[conn.0.bound_xl./brick/gluster/vol-1.active.3]
gfid=00000000-0000-0000-0000-000000000001
nlookup=1
fd-count=1
ref=1
ia_type=2

[xlator.features.locks.vol-locks.inode]
path=/
mandatory=0
inodelk-count=3
lock-dump.domain.domain=dht.layout.heal
lock-dump.domain.domain=vol-disperse-0:self-heal
lock-dump.domain.domain=vol-disperse-0
inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = 12885, owner=6c20b6d5117f0000, client=0x7f0c800b0430, connection-id=apandey-29101-2016/12/05-07:33:03:610275-vol-client-0-0-0, blocked at 2016-12-05 07:37:05, granted at 2016-12-05 07:37:05
inodelk.inodelk[1](BLOCKED)=type=WRITE, whence=0, start=0, len=0, pid = 12890, owner=ac4c67f4257f0000, client=0x7f0c800a87b0, connection-id=apandey-28922-2016/12/05-07:32:35:356027-vol-client-0-0-0, blocked at 2016-12-05 07:37:05
inodelk.inodelk[2](BLOCKED)=type=WRITE, whence=0, start=0, len=0, pid = 12892, owner=6825b6d5117f0000, client=0x7f0c800b0430, connection-id=apandey-29101-2016/12/05-07:33:03:610275-vol-client-0-0-0, blocked at 2016-12-05 07:37:05

[conn.0.bound_xl./brick/gluster/vol-1.lru.1]
gfid=00000000-0000-0000-0000-000000000005
nlookup=6
fd-count=0
ref=0
ia_type=2



In the first brick's log I see the following failed unlock request:


[2016-12-05 07:32:58.401408] I [MSGID: 115029] [server-handshake.c:693:server_setvolume] 0-vol-server: accepted client from apandey-29029-2016/12/05-07:32:51:253616-vol-client-0-0-0 (version: 3.8.5)
[2016-12-05 07:33:10.774116] I [login.c:76:gf_auth] 0-auth/login: allowed user names: 0b9b19e0-5b6f-4401-8e1e-53fb21a2fd9d
[2016-12-05 07:33:10.774138] I [MSGID: 115029] [server-handshake.c:693:server_setvolume] 0-vol-server: accepted client from apandey-29101-2016/12/05-07:33:03:610275-vol-client-0-0-0 (version: 3.8.5)
[2016-12-05 07:37:05.212110] E [inodelk.c:404:__inode_unlock_lock] 0-vol-locks:  Matching lock not found for unlock 0-9223372036854775807, by 6825b6d5117f0000 on 0x7f0c800b0430
[2016-12-05 07:37:05.212146] E [MSGID: 115053] [server-rpc-fops.c:278:server_inodelk_cbk] 0-vol-server: 45050: INODELK / (00000000-0000-0000-0000-000000000001), client: apandey-29101-2016/12/05-07:33:03:610275-vol-client-0-0-0, error-xlator: vol-locks [Invalid argument]


Only in the c3 logs did I find these errors:


[2016-12-05 07:37:05.212211] E [MSGID: 114031] [client-rpc-fops.c:1549:client3_3_inodelk_cbk] 0-vol-client-1: remote operation failed [Invalid argument]
[2016-12-05 07:37:05.212268] E [MSGID: 114031] [client-rpc-fops.c:1549:client3_3_inodelk_cbk] 0-vol-client-2: remote operation failed [Invalid argument]
[2016-12-05 07:37:05.212315] E [MSGID: 114031] [client-rpc-fops.c:1549:client3_3_inodelk_cbk] 0-vol-client-3: remote operation failed [Invalid argument]
[2016-12-05 07:37:05.212345] E [MSGID: 114031] [client-rpc-fops.c:1549:client3_3_inodelk_cbk] 0-vol-client-0: remote operation failed [Invalid argument]
[2016-12-05 07:37:05.212381] E [MSGID: 114031] [client-rpc-fops.c:1549:client3_3_inodelk_cbk] 0-vol-client-5: remote operation failed [Invalid argument]
[2016-12-05 07:37:05.212387] E [MSGID: 114031] [client-rpc-fops.c:1549:client3_3_inodelk_cbk] 0-vol-client-4: remote operation failed [Invalid argument]
[2016-12-05 07:37:05.212414] W [MSGID: 122015] [ec-common.c:1749:ec_unlocked] 0-vol-disperse-0: entry/inode unlocking failed (RENAME) [Invalid argument]
[2016-12-05 07:37:05.222999] I [MSGID: 109066] [dht-rename.c:1562:dht_rename] 0-vol-dht: renaming /a-2501 (hash=vol-disperse-0/cache=vol-disperse-0) => /dir/a-2501 (hash=vol-disperse-0/cache=<nul>)

--- Additional comment from Pranith Kumar K on 2016-12-06 08:53 EST ---

I found one race which could cause this bug: the lk-owner changes just before the inodelk unlock.
Rename takes two locks: 1) on the parent directory of the source file, and 2) on the parent directory of the destination file/location. The lk-owner of these two locks is different, i.e. the address of the respective ctx->inode_lock member. After the rename, at unlock time it sets the lk-owner on the same rename fop frame and unwinds. But it can so happen that, just before ec_inodelk is called, the owner is changed to that of the second parent inode's inode_lock. This can cause the problem.
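
To make the race concrete, here is a minimal standalone C sketch (NOT GlusterFS source; all names in it are invented for this illustration) of the pattern described above: two unlock callbacks share one frame, each sets the frame's lk-owner to its own lock just before sending its unlock, and the unlock that loses the race goes out with the other lock's owner, which is the analogue of the "Matching lock not found for unlock" error in the brick log.

/* race_sketch.c - minimal standalone sketch (NOT GlusterFS source; the names
 * fake_frame, ec_lock and unlock_path are invented for this illustration).
 *
 * Two "unlock" paths share a single frame. Each sets frame->lk_owner to its
 * own lock just before sending the unlock, so the unlock that loses the race
 * is sent with the other lock's owner and the server cannot find a matching
 * lock.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

struct ec_lock    { const char *name; };           /* stands in for one held inodelk      */
struct fake_frame { struct ec_lock *lk_owner; };   /* stands in for the frame's lk-owner  */

static struct fake_frame shared_frame;             /* one frame reused by both unlocks    */

static void send_unlock(struct ec_lock *mine)
{
    /* The server only releases a lock whose owner matches the request. */
    if (shared_frame.lk_owner != mine)
        printf("unlock(%s): sent with owner %s -> matching lock not found\n",
               mine->name, shared_frame.lk_owner->name);
    else
        printf("unlock(%s): owner matches, lock released\n", mine->name);
}

static void *unlock_path(void *arg)
{
    struct ec_lock *mine = arg;

    shared_frame.lk_owner = mine;  /* step 1: set lk-owner on the shared frame;
                                    * unsynchronized on purpose - this is the bug */
    usleep(1000);                  /* widen the window; the other callback can
                                    * overwrite lk_owner right here               */
    send_unlock(mine);             /* step 2: send the unlock using the frame     */
    return NULL;
}

int main(void)
{
    struct ec_lock src_parent = { "src-parent-dir" };
    struct ec_lock dst_parent = { "dst-parent-dir" };
    pthread_t t1, t2;

    /* The two callbacks, picked up by two threads, each go on to unlock their
     * parent directory in parallel on the same frame. */
    pthread_create(&t1, NULL, unlock_path, &src_parent);
    pthread_create(&t2, NULL, unlock_path, &dst_parent);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

Compile with gcc -pthread race_sketch.c and run it a few times; depending on scheduling, one of the two unlocks reports a mismatched owner while the other succeeds, leaving the losing lock held, which is how the ACTIVE lock in the statedump above ends up blocking everyone else.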

Comment 2 Worker Ant 2016-12-08 16:28:53 UTC
REVIEW: http://review.gluster.org/16074 (cluster/ec: Fix lk-owner set race in ec_unlock) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 3 Worker Ant 2016-12-13 09:30:18 UTC
REVIEW: http://review.gluster.org/16074 (cluster/ec: Fix lk-owner set race in ec_unlock) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 4 Worker Ant 2016-12-13 10:30:43 UTC
REVIEW: http://review.gluster.org/16074 (cluster/ec: Fix lk-owner set race in ec_unlock) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 5 Worker Ant 2016-12-13 15:24:55 UTC
COMMIT: http://review.gluster.org/16074 committed in master by Pranith Kumar Karampuri (pkarampu) 
------
commit a0a4163ce6a8dd8bb83b60a4484578fadd02c88f
Author: Pranith Kumar K <pkarampu>
Date:   Thu Dec 8 14:53:04 2016 +0530

    cluster/ec: Fix lk-owner set race in ec_unlock
    
    Problem:
    Rename does two locks. There is a case where, when it tries to unlock, it
    sends an xattrop of the directory with the new version; the callbacks of
    these two xattrops can be picked up by two separate epoll threads. Both of
    them will try to set the lk-owner for unlock in parallel on the same frame,
    so one of these unlocks will fail because the lk-owner doesn't match.
    
    Fix:
    Specify the lk-owner which will be set on the inodelk frame and will not be
    overwritten by any other thread/operation.
    
    BUG: 1402710
    Change-Id: I666ffc931440dc5253d72df666efe0ef1d73f99a
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/16074
    Reviewed-by: Xavier Hernandez <xhernandez>
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
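
For readers following the fix without opening the patch: the idea above amounts to giving each unlock an lk-owner that is fixed when the lock is taken, rather than racing to set one on a shared frame at unlock time. Below is a minimal sketch continuing the invented names from the illustration in comment 1; it is not the actual change, which is at the review link above.

/* fix_sketch.c - continuation of the illustration in comment 1 (NOT the actual
 * patch). The owner is decided once, when the lock is taken, and travels with
 * the lock, so no other thread or operation can overwrite it before unlock.
 */
#include <pthread.h>
#include <stdio.h>

struct ec_lock {
    const char           *name;
    const struct ec_lock *lk_owner;   /* fixed at lock time, never changed */
};

static void send_unlock(const struct ec_lock *lock)
{
    /* The unlock request is built from the lock's own owner field; there is no
     * shared mutable frame state for another thread to clobber. */
    if (lock->lk_owner == lock)
        printf("unlock(%s): owner matches, lock released\n", lock->name);
    else
        printf("unlock(%s): owner mismatch\n", lock->name);
}

static void *unlock_path(void *arg)
{
    send_unlock(arg);
    return NULL;
}

int main(void)
{
    struct ec_lock src_parent = { "src-parent-dir", &src_parent };
    struct ec_lock dst_parent = { "dst-parent-dir", &dst_parent };
    pthread_t t1, t2;

    /* Both unlocks still run in parallel, but each carries its own owner. */
    pthread_create(&t1, NULL, unlock_path, &src_parent);
    pthread_create(&t2, NULL, unlock_path, &dst_parent);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}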

Comment 6 Shyamsundar 2017-03-06 17:38:27 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/

