Bug 1412922 - ls and move hung on disperse volume
Summary: ls and move hung on disperse volume
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: 3.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1400093 1402710
Blocks: 1404572
 
Reported: 2017-01-13 06:51 UTC by Pranith Kumar K
Modified: 2017-02-20 12:34 UTC
CC List: 8 users

Fixed In Version: glusterfs-3.8.9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1402710
Environment:
Last Closed: 2017-02-20 12:34:24 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Pranith Kumar K 2017-01-13 07:00:28 UTC
This bug is easily reproducible on a plain disperse volume as well, without killing any brick.

[root@apandey gluster]# gvi
 
Volume Name: vol
Type: Disperse
Volume ID: c3e903e0-e7b5-42a3-9e75-798c4e3268a0
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: apandey:/brick/gluster/vol-1
Brick2: apandey:/brick/gluster/vol-2
Brick3: apandey:/brick/gluster/vol-3
Brick4: apandey:/brick/gluster/vol-4
Brick5: apandey:/brick/gluster/vol-5
Brick6: apandey:/brick/gluster/vol-6
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
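
For reference, a volume with the layout shown above can be created with something along these lines (the hostname and brick paths mirror the output; "force" may be needed if the bricks sit on the root filesystem):

gluster volume create vol disperse 6 redundancy 2 \
        apandey:/brick/gluster/vol-{1..6} force
gluster volume start vol
gluster volume set vol performance.readdir-ahead on
gluster volume set vol nfs.disable on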


1 - Mount this volume on 3 mount points: c1, c2, and c3.
2 - On c1: mkdir /c1/dir ; cd /c1/dir
3 - On c2: touch 4000 files on the mount point, i.e. "/".
4 - After step 3, start touching the next 4000 files on c2 on the mount point, i.e. "/".
5 - On c1, start touching 10000 files in /dir/.
6 - On c3, start moving the 4000 files created in step 3 from the mount point to /dir/.
7 - On c3, from a different console, start ls in a loop.

All the clients hang after some time (within 3-4 minutes).
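
For convenience, a rough shell sketch of the steps above (assuming the volume is already mounted at /c1, /c2 and /c3 on one client host; file names such as file-$i are placeholders, the counts follow the steps):

# Step 2: create the target directory through c1.
mkdir /c1/dir

# Steps 3-4: touch 4000 files, then the next 4000, at the volume root through c2.
( for i in $(seq 1 8000); do touch /c2/file-$i; done ) &

# Step 5: touch 10000 files inside /dir through c1.
( for i in $(seq 1 10000); do touch /c1/dir/dirfile-$i; done ) &

# Step 6: move the first 4000 root files into /dir through c3.
( for i in $(seq 1 4000); do mv /c3/file-$i /c3/dir/ 2>/dev/null; done ) &

# Step 7: run ls in a loop through c3 (the report does this from a separate console).
while true; do ls -l /c3 > /dev/null; done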

Comment 2 Worker Ant 2017-01-13 07:12:56 UTC
REVIEW: http://review.gluster.org/16394 (cluster/ec: Fix lk-owner set race in ec_unlock) posted (#1) for review on release-3.8 by Pranith Kumar Karampuri (pkarampu)

Comment 3 Worker Ant 2017-01-17 15:55:44 UTC
COMMIT: http://review.gluster.org/16394 committed in release-3.8 by Pranith Kumar Karampuri (pkarampu) 
------
commit d724b4213902b694ec13a48bc4d55d538986787f
Author: Pranith Kumar K <pkarampu>
Date:   Thu Dec 8 14:53:04 2016 +0530

    cluster/ec: Fix lk-owner set race in ec_unlock
    
    Problem:
    Rename takes two locks. When it tries to unlock, it sends xattrops of the
    directories with the new version; the callbacks of these two xattrops can be
    picked up by two separate epoll threads. Both of them then try to set the
    lk-owner for unlock in parallel on the same frame, so one of the unlocks
    fails because the lk-owner doesn't match.
    
    Fix:
    Specify the lk-owner which will be set on the inodelk frame, so that it
    cannot be overwritten by any other thread/operation.
    
     >BUG: 1402710
     >Change-Id: I666ffc931440dc5253d72df666efe0ef1d73f99a
     >Signed-off-by: Pranith Kumar K <pkarampu>
     >Reviewed-on: http://review.gluster.org/16074
     >Reviewed-by: Xavier Hernandez <xhernandez>
     >Smoke: Gluster Build System <jenkins.org>
     >NetBSD-regression: NetBSD Build System <jenkins.org>
     >CentOS-regression: Gluster Build System <jenkins.org>
    
    BUG: 1412922
    Change-Id: I18c553adaa0cbc8df55876accc1e1dcd4ee9c116
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/16394
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Xavier Hernandez <xhernandez>
    CentOS-regression: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
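
To illustrate the kind of lost-update race described in the commit message above, here is a purely illustrative shell sketch (not GlusterFS code; "owner-A"/"owner-B" and the temp file stand in for the lk-owner values and the shared call frame):

FRAME=$(mktemp)                 # stands in for the shared call frame
unlock() {
    # each xattrop callback sets the lk-owner on the shared frame, then "unlocks"
    echo "$1" > "$FRAME"
    sleep 0.1                   # window in which the other callback can overwrite it
    if [ "$(cat "$FRAME")" = "$1" ]; then
        echo "unlock by $1 succeeds"
    else
        echo "unlock by $1 fails: lk-owner mismatch"
    fi
}
unlock owner-A &
unlock owner-B &
wait
rm -f "$FRAME"

The fix roughly corresponds to pinning one well-known owner on the inodelk frame so that no other callback can overwrite it before the unlock is sent.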

Comment 4 Niels de Vos 2017-02-20 12:34:24 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.9, please open a new bug report.

glusterfs-3.8.9 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2017-February/000066.html
[2] https://www.gluster.org/pipermail/gluster-users/

