Bug 1498570

Summary: client-io-threads option not working for replicated volumes
Product: [Community] GlusterFS
Component: replicate
Version: mainline
Hardware: x86_64
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: high
Keywords: Triaged
Reporter: Ravishankar N <ravishankar>
Assignee: Ravishankar N <ravishankar>
CC: amukherj, bugs, mpillai, nchilaka, rcyriac, rhs-bugs, storage-qa-internal
Fixed In Version: glusterfs-3.13.0
Clone Of: 1487495
Cloned As: 1499158 (view as bug list)
Last Closed: 2017-12-08 17:42:08 UTC
Type: Bug
Bug Depends On: 1487495
Bug Blocks: 1499158

Description Ravishankar N 2017-10-04 16:12:41 UTC
+++ This bug was initially created as a clone of Bug #1487495 +++

Description of problem:
The option "performance.client-io-threads on" is needed in some scenarios where the fuse thread becomes a bottleneck, but it appears to be disabled for replicated volumes: turning the option on via the "gluster v set ..." command returns success yet has no effect.

It is fine for the option to default to off on replicated volumes, but we need the ability to turn it on in scenarios where the fuse thread is the bottleneck.
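The behaviour above can be illustrated with the following CLI fragment. This is a hedged sketch, not output from the original reporter's setup: "testvol" is a placeholder replica volume name, and the volfile path may differ slightly between glusterfs versions. It requires a running glusterd, so it is illustrative only.

```shell
# 'testvol' is a hypothetical replica-3 volume used for illustration.
gluster volume set testvol performance.client-io-threads on
# The set command reports success, and the option shows as 'on':
gluster volume get testvol performance.client-io-threads
# ...but before the fix, the io-threads xlator is still missing from the
# generated FUSE client volfile, so this grep finds no match
# (volfile path is an assumption; it may vary by version):
grep 'performance/io-threads' \
    /var/lib/glusterd/vols/testvol/trusted-testvol.tcp-fuse.vol
```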

Version-Release number of selected component (if applicable):

glusterfs-libs-3.8.4-43.el7rhgs.x86_64
glusterfs-3.8.4-43.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-43.el7rhgs.x86_64
glusterfs-fuse-3.8.4-43.el7rhgs.x86_64

kernel-3.10.0-693.el7.x86_64


How reproducible:
always

Comment 1 Worker Ant 2017-10-04 16:15:53 UTC
REVIEW: https://review.gluster.org/18430 (glusterd : fixes for client io-threads for replicate volumes) posted (#1) for review on master by Ravishankar N (ravishankar)

Comment 2 Worker Ant 2017-10-05 01:41:34 UTC
REVIEW: https://review.gluster.org/18430 (glusterd : fixes for client io-threads for replicate volumes) posted (#2) for review on master by Ravishankar N (ravishankar)

Comment 3 Worker Ant 2017-10-06 05:18:00 UTC
REVIEW: https://review.gluster.org/18430 (glusterd : fix client io-threads option for replicate volumes) posted (#3) for review on master by Ravishankar N (ravishankar)

Comment 4 Worker Ant 2017-10-06 09:33:38 UTC
REVIEW: https://review.gluster.org/18430 (glusterd : fix client io-threads option for replicate volumes) posted (#4) for review on master by Ravishankar N (ravishankar)

Comment 5 Worker Ant 2017-10-09 06:49:24 UTC
REVIEW: https://review.gluster.org/18430 (glusterd : fix client io-threads option for replicate volumes) posted (#5) for review on master by Ravishankar N (ravishankar)

Comment 6 Worker Ant 2017-10-09 11:57:12 UTC
COMMIT: https://review.gluster.org/18430 committed in master by Atin Mukherjee (amukherj) 
------
commit 452b9124f452d6c73f72da577a98f17502b1ed2c
Author: Ravishankar N <ravishankar>
Date:   Tue Oct 3 18:41:11 2017 +0530

    glusterd : fix client io-threads option for replicate volumes
    
    Problem:
    Commit ff075a3d6f9b142911d25c27fd209838782bfff0 disabled loading
    client-io-threads for replicate volumes (it was set to on by default in
    commit e068c1997314046658dd502e9118dab32decf879) due to performance
    issues but in doing so, inadvertently failed to load the xlator even if
    the user explicitly enabled the option using the volume set command.
    This was despite returning success for the volume set.
    
    Fix:
    Modify the check in perfxl_option_handler() and add checks in volume
    create/add-brick/remove-brick code paths, tying it all to
    GD_OP_VERSION_3_12_2.
    
    Change-Id: Ib612973a999a7da818cc926f5c2601b1f0794fcf
    BUG: 1498570
    Signed-off-by: Ravishankar N <ravishankar>

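Under the fix described in the commit above, a cluster whose op-version is at least GD_OP_VERSION_3_12_2 regenerates the client volfile with the io-threads xlator when the option is enabled. A hedged sketch of how one might verify this; "testvol" is a placeholder volume name, 31202 is the numeric op-version assumed to correspond to 3.12.2, and the volfile path is an assumption that may vary by version:

```shell
# Check the cluster op-version; the fix takes effect at >= 31202 (assumed
# numeric form of GD_OP_VERSION_3_12_2).
gluster volume get all cluster.op-version
# Enable the option on a hypothetical volume:
gluster volume set testvol performance.client-io-threads on
# With the fix, the io-threads xlator should now appear in the regenerated
# FUSE client volfile (path is an assumption; adjust for your install):
grep 'performance/io-threads' \
    /var/lib/glusterd/vols/testvol/trusted-testvol.tcp-fuse.vol
```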
Comment 7 Shyamsundar 2017-12-08 17:42:08 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.13.0, please open a new bug report.

glusterfs-3.13.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-December/000087.html
[2] https://www.gluster.org/pipermail/gluster-users/