Bug 1338668

Summary: AFR : fuse,nfs mount hangs when directories with same names are created and deleted continuously
Product: [Community] GlusterFS
Reporter: Sakshi <sabansal>
Component: distribute
Assignee: Sakshi <sabansal>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: medium
Docs Contact:
Priority: medium
Version: 3.7.11
CC: chorn, kramdoss, nbalacha, ndevos, pcuzner, pkarampu, rcyriac, rgowdapp, rhinduja, rhs-bugs, sabansal, sankarshan, smohan, spalai, spandura, storage-qa-internal, vbellur
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard: triaged
Fixed In Version: glusterfs-3.7.12
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1338634
Environment:
Last Closed: 2016-06-28 12:18:56 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On: 951195, 986916, 1115367, 1262680, 1266834, 1266836, 1286582, 1338634, 1338669
Bug Blocks: 1121920, 1299184

Description Sakshi 2016-05-23 08:34:16 UTC
+++ This bug was initially created as a clone of Bug #1338634 +++

+++ This bug was initially created as a clone of Bug #1286582 +++

+++ This bug was initially created as a clone of Bug #986916 +++

Description of problem:
=========================
In a distribute-replicate volume, when directories with the same names are created and deleted continuously on fuse and nfs mount points, the mount points hang after some time.



How reproducible:

test_bug_922792.sh
===================
#!/bin/bash

# $1 (optional) distinguishes the top-level directory per invocation;
# running the script with the same (or no) argument from every mount
# point makes all clients create and delete the same directory tree
# concurrently.
dir=$(dirname "$(readlink -f "$0")")
echo "Script in $dir"
while :
do
        mkdir -p "foo$1/bar/gee"
        mkdir -p "foo$1/bar/gne"
        mkdir -p "foo$1/lna/gme"
        rm -rf "foo$1"
done

Steps to Reproduce:
===================
1. Create a 6 x 2 distribute-replicate volume across 4 storage nodes, with 3 bricks on each node.

2. Start the volume.

3. Create 2 fuse and 2 nfs mounts each on RHEL 5.9 and RHEL 6.4 clients.

4. From all the mount points, execute "test_bug_922792.sh".
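For reference, the volume in step 1 can be set up with commands along these lines. The hostnames (node1..node4), brick paths, and volume name here are hypothetical placeholders, not taken from the report; adjust them to your environment:

```shell
# 6 x 2 = 12 bricks total; with replica 2, consecutive brick pairs form
# replica sets. 3 bricks per node across 4 nodes.
gluster volume create testvol replica 2 \
    node1:/bricks/b1 node2:/bricks/b1 \
    node3:/bricks/b1 node4:/bricks/b1 \
    node1:/bricks/b2 node2:/bricks/b2 \
    node3:/bricks/b2 node4:/bricks/b2 \
    node1:/bricks/b3 node2:/bricks/b3 \
    node3:/bricks/b3 node4:/bricks/b3
gluster volume start testvol

# On a client, one fuse mount and one nfs mount:
mount -t glusterfs node1:/testvol /mnt/fuse
mount -t nfs -o vers=3 node1:/testvol /mnt/nfs
```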

Actual results:
===============
After some time, the fuse and nfs mounts hang.

Expected results:
================
The fuse and nfs mounts should not hang.

--- Additional comment from Vijay Bellur on 2016-05-23 04:32:17 EDT ---

REVIEW: http://review.gluster.org/14496 (dht: selfheal should wind mkdir call to subvols with ESTALE error) posted (#1) for review on master by Sakshi Bansal

Comment 1 Vijay Bellur 2016-05-23 08:39:20 UTC
REVIEW: http://review.gluster.org/14497 (dht: selfheal should wind mkdir call to subvols with ESTALE error) posted (#1) for review on release-3.7 by Sakshi Bansal

Comment 2 Vijay Bellur 2016-05-27 03:37:31 UTC
COMMIT: http://review.gluster.org/14497 committed in release-3.7 by Raghavendra G (rgowdapp@redhat.com) 
------
commit 4c9ec0c30cd895095419332510292ce530f24fdb
Author: Sakshi Bansal <sabansal@redhat.com>
Date:   Fri May 20 15:16:17 2016 +0530

    dht: selfheal should wind mkdir call to subvols with ESTALE error
    
    Backport of http://review.gluster.org/#/c/14496/
    
    > Change-Id: I7140e50263b5f28b900829592c664fa1d79f3f99
    > BUG: 1338634
    > Signed-off-by: Sakshi Bansal <sabansal@redhat.com>
    
    Change-Id: I7140e50263b5f28b900829592c664fa1d79f3f99
    BUG: 1338668
    Signed-off-by: Sakshi Bansal <sabansal@redhat.com>
    Reviewed-on: http://review.gluster.org/14497
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>

Comment 3 Kaushal 2016-06-28 12:18:56 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.12, please open a new bug report.

glusterfs-3.7.12 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-devel/2016-June/049918.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user