Bug 1568348

Summary: Rebalance on few nodes doesn't seem to complete - stuck at FUTEX_WAIT
Product: [Community] GlusterFS
Reporter: Nithya Balachandran <nbalacha>
Component: distribute
Assignee: Nithya Balachandran <nbalacha>
Severity: high
Docs Contact:
Priority: unspecified
Version: mainline
CC: bugs, rhs-bugs, storage-qa-internal, tdesala
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Fixed In Version: glusterfs-v4.1.0
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1565119
Blocks: 1570475 1570476 (view as bug list)
Environment:
Last Closed: 2018-06-20 18:04:29 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On: 1565119
Bug Blocks: 1570475, 1570476

Description Nithya Balachandran 2018-04-17 10:05:22 UTC
Concurrent directory renames and fix-layouts can deadlock.

Steps to reproduce this with upstream master:

1. Create a 5-brick pure distribute volume.
2. Mount the volume on 2 different mount points (/mnt/1 and /mnt/2).
3. From /mnt/1, create 3 levels of directories (mkdir -p d0/d1/d2).
4. Add bricks to the volume.
5. Attach gdb to the client (mount) process for /mnt/1 and set a breakpoint at dht_rename_dir_lock1_cbk.
6. From /mnt/1, run 'mv d0/d1 d0/d1_a'.
In this particular example, the name of the hashed subvolume of /d0/d1 is alphabetically greater than that of /d0/d1_a:
[2018-04-17 09:24:02.267020] I [MSGID: 109066] [dht-rename.c:1751:dht_rename] 2-dlock-dht: renaming /d0/d1 (hash=dlock-client-3/cache=dlock-client-0) => /d0/d1_a (hash=dlock-client-0/cache=<nul>)

7. Once the breakpoint is hit in the /mnt/1 process, run the following on the other mount point /mnt/2

setfattr -n "distribute.fix.layout" -v "1" d0

8. Allow gdb to continue.

Both processes are now deadlocked.

[root@rhgs313-6 ~]# gluster v create dlock server1:/bricks/brick1/deadlock-{1..5} force
volume create: dlock: success: please start the volume to access data
[root@rhgs313-6 ~]# gluster v start dlock
volume start: dlock: success
[root@rhgs313-6 ~]# mount -t glusterfs -s server1:dlock /mnt/fuse1
[root@rhgs313-6 ~]# mount -t glusterfs -s server1:dlock /mnt/fuse2
[root@rhgs313-6 ~]# cd /mnt/fuse1
[root@rhgs313-6 fuse1]# l
total 0
[root@rhgs313-6 fuse1]# 
[root@rhgs313-6 fuse1]# mkdir -p d0/d1/d2
[root@rhgs313-6 fuse1]# ll -lR
.:
total 4
drwxr-xr-x. 3 root root 4096 Apr 17 14:49 d0

./d0:
total 4
drwxr-xr-x. 3 root root 4096 Apr 17 14:49 d1

./d0/d1:
total 4
drwxr-xr-x. 2 root root 4096 Apr 17 14:49 d2
[root@rhgs313-6 fuse1]# gluster v add-brick dlock server1:/bricks/brick1/deadlock-{6..7} force
volume add-brick: success

Attach gdb to the /mnt/fuse1 client process and set the breakpoint at dht_rename_dir_lock1_cbk as described above.

[root@rhgs313-6 fuse1]# mv d0/d1 d0/d1_a

Once gdb breaks at the breakpoint,
[root@rhgs313-6 brick1]# cd /mnt/fuse2/
[root@rhgs313-6 fuse2]# ll
total 4
drwxr-xr-x. 3 root root 4096 Apr 17 14:49 d0
[root@rhgs313-6 fuse2]# setfattr -n "distribute.fix.layout" -v "1" d0

This will hang. Allow gdb to continue. /mnt/fuse1 will also hang.

Comment 1 Worker Ant 2018-04-17 10:12:59 UTC
REVIEW: https://review.gluster.org/19886 (cluster/dht: Fix dht_rename lock order) posted (#1) for review on master by N Balachandran

Comment 2 Worker Ant 2018-04-23 01:43:39 UTC
COMMIT: https://review.gluster.org/19886 committed in master by "Raghavendra G" <rgowdapp@redhat.com> with commit message: cluster/dht: Fix dht_rename lock order

Fixed dht_order_rename_lock to use the same inodelk ordering
as that of the dht selfheal locks (dictionary order of
lock subvolumes).

Change-Id: Ia3f8353b33ea2fd3bc1ba7e8e777dda6c1d33e0d
fixes: bz#1568348
Signed-off-by: N Balachandran <nbalacha@redhat.com>

Comment 3 Shyamsundar 2018-06-20 18:04:29 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/