Bug 1519076 - glusterfs client crash when removing directories
Summary: glusterfs client crash when removing directories
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: distribute
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Nithya Balachandran
QA Contact: Prasad Desala
URL:
Whiteboard:
Depends On: 1490642
Blocks: 1503137 1505221
 
Reported: 2017-11-30 04:32 UTC by Nithya Balachandran
Modified: 2018-09-17 11:12 UTC
CC List: 4 users

Fixed In Version: glusterfs-3.12.2-2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1490642
Environment:
Last Closed: 2018-09-04 06:39:49 UTC
Embargoed:




Links
System: Red Hat Product Errata    ID: RHSA-2018:2607    Private: 0    Priority: None    Status: None    Summary: None    Last Updated: 2018-09-04 06:41:33 UTC

Description Nithya Balachandran 2017-11-30 04:32:23 UTC
+++ This bug was initially created as a clone of Bug #1490642 +++

Description of problem:
The glusterfs client crashes when removing directories in parallel. This issue was found by LTP test case inode02. Glusterfs needs to be configured with more than one brick. It is much easier to reproduce with commit "event/epoll: Add back socket for polling of events immediately after reading the entire rpc message from the wire".

Version-Release number of selected component (if applicable):
mainline

How reproducible:
On some test machines it crashes every time; on others it never crashes.

Steps to Reproduce:
1. Create a glusterfs volume with more than one brick.
2. FUSE-mount the volume.
3. Run LTP test case inode02.

Actual results:
The glusterfs client crashes while removing directories and the test fails.

Expected results:
The test finishes without errors.

Additional info:

--- Additional comment from Nithya Balachandran on 2017-09-20 10:19:47 EDT ---

Can you please provide the coredump and rpm versions?

--- Additional comment from Zhang Huan on 2017-09-21 22:14:17 EDT ---

I have a fix for this issue. Since I could not log in to review.gluster.org, I am putting the link to it below, FYI.
https://github.com/zhanghuan/glusterfs-1/commit/cd383bc1f49975fae769bed1cbd67e3b0a309819

--- Additional comment from Nithya Balachandran on 2017-09-22 00:29:55 EDT ---

Thank you Zhang for finding the BZ and the fix. 

Are you able to log in to review.gluster.org now? It would be great if you can submit the patch there.

Regards,
Nithya

--- Additional comment from Zhang Huan on 2017-09-22 00:40:42 EDT ---

No, "signed in with GitHub" still gives me a result of forbidden. It is been for a while.

I saw your comment on my patch. It is good advice; I will modify the patch accordingly and resend it after testing.

Thank you for your reply.

--- Additional comment from Nithya Balachandran on 2017-09-22 01:58:17 EDT ---

(In reply to Zhang Huan from comment #4)
> No, "signed in with GitHub" still gives me a result of forbidden. It is been
> for a while.
> 
You can ask for help on this by logging into the #gluster channel in IRC. Ask for nigelb.

--- Additional comment from Nithya Balachandran on 2017-10-11 04:12:21 EDT ---

Hi Zhang,

Please file a bug for the issue where you cannot log into review.gluster.org. Please use component project-infrastructure.

Thanks,
Nithya

--- Additional comment from Nithya Balachandran on 2017-10-11 04:16:22 EDT ---

(In reply to Nithya Balachandran from comment #6)
> Hi Zhang,
> 
> Please file a bug for the issue where you cannot log into
> review.gluster.org. Please use component project-infrastructure.
> 
> Thanks,
> Nithya

Please ignore this - I just realised there is a BZ already.

--- Additional comment from Zhang Huan on 2017-10-12 02:03:15 EDT ---

The login issue has been fixed. Related link is 
https://bugzilla.redhat.com/show_bug.cgi?id=1494363

I will continue to post the patch to review.gluster.org for review.

--- Additional comment from Worker Ant on 2017-10-13 01:53:31 EDT ---

REVIEW: https://review.gluster.org/18517 (cluster/dht: fix crash when deleting directories) posted (#1) for review on master by Zhang Huan (zhanghuan)

--- Additional comment from Worker Ant on 2017-10-16 06:33:17 EDT ---

COMMIT: https://review.gluster.org/18517 committed in master by Raghavendra G (rgowdapp) 
------
commit 206120126d455417a81a48ae473d49be337e9463
Author: Zhang Huan <zhanghuan>
Date:   Tue Sep 5 11:36:25 2017 +0800

    cluster/dht: fix crash when deleting directories
    
    In DHT, after locks on all subvolumes are acquired, rmdir performs the
    following steps sequentially:
    1. send rmdir on all subvolumes except the hashed one, in a loop;
    2. wait for all pending rmdirs to complete;
    3. send rmdir on the hashed subvolume.
    
    The problem is that the loop in step 1 checks each subvolume against the
    hashed one in order to skip it. If the hashed subvolume happens to be the
    last one checked, step 3 can complete before the loop reaches it; the loop
    then accesses shared context data that step 3 has already destroyed,
    causing a crash.
    
    Fix this by saving the shared data in a local variable before the loop and
    accessing that copy inside the loop.
    
    Change-Id: I8db7cf7cb262d74efcb58eb00f02ea37df4be4e2
    BUG: 1490642
    Signed-off-by: Zhang Huan <zhanghuan>
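
A minimal sketch of the race described in the commit above and of the fix pattern it applies. This is an illustration only, not the actual dht source: struct rmdir_ctx, rmdir_on_subvol() and the field names below are hypothetical.

/*
 * Sketch of the use-after-free and its avoidance, assuming hypothetical
 * names. In the real code, the shared context is the frame-local data and
 * "winding" an rmdir may trigger a reply callback that destroys it.
 */
#include <stdio.h>

struct rmdir_ctx {
    int hashed_subvol;   /* index of the hashed subvolume                  */
    int call_cnt;        /* pending rmdir replies; the context is          */
                         /* destroyed once this drops to zero              */
};

/* Stand-in for winding an rmdir call to subvolume i. In the real code the
 * reply callback may run at any time afterwards and free the shared ctx.  */
static void rmdir_on_subvol(struct rmdir_ctx *ctx, int i)
{
    (void)ctx;
    printf("rmdir wound to subvolume %d\n", i);
}

/* Buggy pattern: once the last non-hashed rmdir has been wound, the replies
 * may destroy ctx, yet the loop still dereferences ctx->hashed_subvol when
 * it finally reaches the hashed subvolume.                                 */
static void remove_dirs_buggy(struct rmdir_ctx *ctx, int subvol_count)
{
    for (int i = 0; i < subvol_count; i++) {
        if (i == ctx->hashed_subvol)      /* may read freed memory */
            continue;
        rmdir_on_subvol(ctx, i);
    }
}

/* Fixed pattern (what the patch does conceptually): copy the shared value
 * into a stack variable before winding anything, so the loop never touches
 * ctx after the calls have been issued.                                    */
static void remove_dirs_fixed(struct rmdir_ctx *ctx, int subvol_count)
{
    int hashed = ctx->hashed_subvol;      /* saved before the loop */

    for (int i = 0; i < subvol_count; i++) {
        if (i == hashed)
            continue;
        rmdir_on_subvol(ctx, i);
    }
}

int main(void)
{
    struct rmdir_ctx ctx = { .hashed_subvol = 2, .call_cnt = 0 };

    remove_dirs_buggy(&ctx, 3);
    remove_dirs_fixed(&ctx, 3);
    return 0;
}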

--- Additional comment from Worker Ant on 2017-10-23 00:50:32 EDT ---

REVIEW: https://review.gluster.org/18551 (cluster/dht: fix crash when deleting directories) posted (#1) for review on release-3.12 by N Balachandran (nbalacha)

Comment 5 Prasad Desala 2018-04-19 13:19:53 UTC
Verified this BZ on glusterfs version 3.12.2-7.el7rhgs.x86_64.

Ran LTP test case inode02 six times and did not hit any crashes.

Executing /opt/qa/tools/ltp-full-20180118/testcases/kernel/fs//inode/inode02

real	0m0.946s
user	0m0.134s
sys	0m1.274s

inode02     1  TPASS  :  Test passed

Hence, moving this BZ to Verified.

Comment 7 errata-xmlrpc 2018-09-04 06:39:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607

