Description of problem:
DHT crashed when a directory was renamed from "xyz" to ".xyz" on a cifs mount, the same directory was then set to hidden in Windows (right-click, Properties, check the Hidden checkbox), the directory was renamed back from ".xyz" to "xyz" on the server's cifs mount, and "xyz" was then double-clicked in Windows. After that, the share was no longer accessible from Windows.

Version-Release number of selected component (if applicable):
glusterfs-3.7.9-9.el7rhgs.x86_64
samba-client-4.4.3-7.el7rhgs.x86_64

How reproducible:
1/1

Steps to Reproduce:
1. Start from an existing setup with a Distributed-Replicate volume, samba-ctdb, and the VSS plugins.
2. Do a cifs mount, and mount the share in Windows 8 as well.
3. Create files and directories in the share.
4. Play a video file (not necessarily from the directory that will be renamed in the steps below).
5. On the cifs mount, rename a directory containing files, say "xyz", to ".xyz".
6. In the Windows share, right-click "xyz", open Properties, and check the Hidden checkbox.
7. On the cifs mount, rename ".xyz" back to "xyz".
8. Double-click "xyz" in Windows.

Actual results:
The share was not accessible, and the video that was playing crashed midway.

Expected results:
There should be no issue.

Additional info:
-------------------------------BT---------------------------------
(gdb) bt
#0  0x00007f61e3e335f7 in raise () from /lib64/libc.so.6
#1  0x00007f61e3e34ce8 in abort () from /lib64/libc.so.6
#2  0x00007f61e5794beb in dump_core () at ../source3/lib/dumpcore.c:322
#3  0x00007f61e5787fe7 in smb_panic_s3 (why=<optimized out>) at ../source3/lib/util.c:814
#4  0x00007f61e7c7957f in smb_panic (why=why@entry=0x7f61e7cc054a "internal error") at ../lib/util/fault.c:166
#5  0x00007f61e7c79796 in fault_report (sig=<optimized out>) at ../lib/util/fault.c:83
#6  sig_fault (sig=<optimized out>) at ../lib/util/fault.c:94
#7  <signal handler called>
#8  0x0000000000000000 in ?? ()
#9  0x00007f61c547e136 in dht_selfheal_dir_finish (frame=frame@entry=0x7f61c7f8987c, this=this@entry=0x7f61b800dc10, ret=ret@entry=0, invoke_cbk=invoke_cbk@entry=1) at dht-selfheal.c:121
#10 0x00007f61c5482d2f in dht_selfheal_directory (frame=frame@entry=0x7f61c7f8987c, dir_cbk=dir_cbk@entry=0x7f61c54938c0 <dht_lookup_selfheal_cbk>, loc=loc@entry=0x7f61c40641e0, layout=layout@entry=0x7f61a8000990) at dht-selfheal.c:2125
#11 0x00007f61c5499563 in dht_lookup_dir_cbk (frame=0x7f61c7f8987c, cookie=<optimized out>, this=0x7f61b800dc10, op_ret=<optimized out>, op_errno=0, inode=0x7f61bd56c51c, stbuf=0x7f61a8002510, xattr=0x7f61c79a4484, postparent=0x7f61a8002580) at dht-common.c:737
#12 0x00007f61c57390d3 in afr_lookup_done (frame=frame@entry=0x7f61c7f8a494, this=this@entry=0x7f61b800bae0) at afr-common.c:1825
#13 0x00007f61c5739734 in afr_lookup_sh_metadata_wrap (opaque=0x7f61c7f8a494) at afr-common.c:1989
#14 0x00007f61cc5f7262 in synctask_wrap (old_task=<optimized out>) at syncop.c:380
#15 0x00007f61e3e45110 in ?? () from /lib64/libc.so.6
#16 0x0000000000000000 in ?? ()
(gdb) f 9
#9 0x00007f61c547e136 in dht_selfheal_dir_finish (frame=frame@entry=0x7f61c7f8987c, this=this@entry=0x7f61b800dc10, ret=ret@entry=0, invoke_cbk=invoke_cbk@entry=1) at dht-selfheal.c:121
121         local->selfheal.dir_cbk (frame, NULL, frame->this, ret,

------------------Client log--------------------------------
[2016-06-13 04:35:35.819883] W [MSGID: 101182] [inode.c:174:__foreach_ancestor_dentry] 0-DOG-dht: per dentry fn returned 1
[2016-06-13 04:35:35.819907] C [MSGID: 101184] [inode.c:228:__is_dentry_cyclic] 0-meta-autoload/inode: detected cyclic loop formation during inode linkage. inode (408e7032-cc0f-479f-a450-b17302802adf) linking under itself as .samba2
[2016-06-13 04:35:35.820427] W [MSGID: 109005] [dht-selfheal.c:2064:dht_selfheal_directory] 0-DOG-dht: linking inode failed (408e7032-cc0f-479f-a450-b17302802adf/.samba2) => 408e7032-cc0f-479f-a450-b17302802adf
There are two issues here:
1. The inode-link failure.
2. The crash after an inode link fails.
Issue 2 is a regression introduced during the 3.1.3 release period. Please file a separate bug to track issue 1.
Tried reproducing this on a fresh setup, but it is not reproducible with the above-mentioned steps.
Regression introduced in 3.1.3
Upstream mainline: http://review.gluster.org/14707
Upstream 3.8: http://review.gluster.org/15157

The fix is available in rhgs-3.2.0 as part of the rebase to GlusterFS 3.8.4.
Verified with the Steps to Reproduce mentioned above, on the following versions:
samba-client-4.4.6-2.el7rhgs.x86_64
glusterfs-cli-3.8.4-3.el7rhgs.x86_64

No crash was seen; hence marking it as verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html