Bug 1381830 - Regression caused by enabling client-io-threads by default
Summary: Regression caused by enabling client-io-threads by default
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: io-threads
Version: mainline
Hardware: All
OS: All
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1387894 1412941
 
Reported: 2016-10-05 07:25 UTC by Soumya Koduri
Modified: 2017-03-06 17:29 UTC
CC List: 2 users

Fixed In Version: glusterfs-3.10.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1387894 1412941 (view as bug list)
Environment:
Last Closed: 2017-03-06 17:29:11 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Soumya Koduri 2016-10-05 07:25:53 UTC
Description of problem:

As mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=1380619#c11, enabling the client-io-threads option by default has caused a regression for gfapi applications.

The iot-worker threads that get spawned are not cleaned up as part of xlator->fini(), so they can end up accessing invalid/freed memory.

We need to fix io-threads' fini() to clean up those threads before exiting. Since that could be an intricate fix, we could disable io-threads by default until then.
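
A minimal gfapi sketch of the scenario described above (not taken from the original report); the volume name "testvol" and the server "localhost" are placeholders. With performance.client-io-threads enabled, the iot-worker threads spawned for this client survive glfs_fini() and may later dereference memory that the fini() path has already freed. Build with something like "gcc repro.c -lgfapi".

/*
 * Hypothetical reproducer: a plain glfs_init()/glfs_fini() cycle on a
 * volume that has performance.client-io-threads enabled.
 */
#include <stdio.h>
#include <stdlib.h>
#include <glusterfs/api/glfs.h>

int
main(void)
{
        glfs_t *fs = glfs_new("testvol");          /* placeholder volume name */
        if (!fs) {
                fprintf(stderr, "glfs_new failed\n");
                return EXIT_FAILURE;
        }

        glfs_set_volfile_server(fs, "tcp", "localhost", 24007);
        glfs_set_logging(fs, "/dev/stderr", 7);

        if (glfs_init(fs) != 0) {
                fprintf(stderr, "glfs_init failed\n");
                glfs_fini(fs);
                return EXIT_FAILURE;
        }

        /* ... application I/O would normally happen here ... */

        /*
         * Before the fix, fini() freed the io-threads private data while
         * idle iot-worker threads were still asleep; once their timed
         * wait expired they could wake up and touch the freed structure.
         */
        glfs_fini(fs);
        return EXIT_SUCCESS;
}

Until the fini() path is fixed, the option can also be turned off per volume, e.g. with "gluster volume set <volname> performance.client-io-threads off".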



Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Worker Ant 2016-10-05 09:35:56 UTC
REVIEW: http://review.gluster.org/15616 (Revert "mgmt/glusterd: Enable client-io-threads by default") posted (#2) for review on master by soumya k (skoduri)

Comment 2 Worker Ant 2016-10-09 16:18:41 UTC
REVIEW: http://review.gluster.org/15620 (performance/io-threads: Exit all threads on PARENT_DOWN) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 3 Worker Ant 2016-10-21 11:38:04 UTC
REVIEW: http://review.gluster.org/15620 (performance/io-threads: Exit all threads on PARENT_DOWN) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 4 Worker Ant 2016-10-22 07:59:06 UTC
REVIEW: http://review.gluster.org/15620 (performance/io-threads: Exit all threads on PARENT_DOWN) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 5 Worker Ant 2016-10-23 09:32:10 UTC
COMMIT: http://review.gluster.org/15620 committed in master by Raghavendra G (rgowdapp) 
------
commit d7a5ca16911caca03cec1112d4be56a9cda2ee30
Author: Pranith Kumar K <pkarampu>
Date:   Sun Oct 9 21:36:40 2016 +0530

    performance/io-threads: Exit all threads on PARENT_DOWN
    
    Problem:
    When glfs_fini() is called on a volume where client.io-threads is enabled,
    fini() frees the io-threads xlator's private structure, but some threads may
    still be sleeping; they wake up once their timed wait completes and end up
    accessing the already freed memory.
    
    Fix:
    As part of parent-down, exit all sleeping threads.
    
    BUG: 1381830
    Change-Id: I0bb8d90241112c355fb22ee3fbfd7307f475b339
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/15620
    Smoke: Gluster Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Raghavendra G <rgowdapp>
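
For illustration only, and not the actual io-threads code: a generic pthread pattern for the approach the commit describes. When PARENT_DOWN arrives, the translator sets a stop flag and broadcasts on its condition variable, so every sleeping worker wakes up and exits before fini() frees the private structure. The names iot_pool_t, iot_worker and iot_parent_down are hypothetical.

#include <pthread.h>
#include <stdbool.h>
#include <time.h>

typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  cond;
        bool            stopping;   /* set when PARENT_DOWN is received */
} iot_pool_t;

static void *
iot_worker(void *arg)
{
        iot_pool_t *pool = arg;

        pthread_mutex_lock(&pool->lock);
        while (!pool->stopping) {
                struct timespec ts;
                clock_gettime(CLOCK_REALTIME, &ts);
                ts.tv_sec += 120;   /* idle timeout while waiting for work */

                /* Re-check the stop flag after every wakeup, timed or signalled. */
                pthread_cond_timedwait(&pool->cond, &pool->lock, &ts);

                /* ... dequeue and service queued requests here ... */
        }
        pthread_mutex_unlock(&pool->lock);
        return NULL;   /* exit cleanly instead of touching freed memory later */
}

static void
iot_parent_down(iot_pool_t *pool)
{
        pthread_mutex_lock(&pool->lock);
        pool->stopping = true;
        pthread_cond_broadcast(&pool->cond);   /* wake every sleeping worker */
        pthread_mutex_unlock(&pool->lock);
}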

Comment 6 Shyamsundar 2017-03-06 17:29:11 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/

