Bug 1558959 - [brick-mux] incorrect event-thread scaling in server_reconfigure()
Summary: [brick-mux] incorrect event-thread scaling in server_reconfigure()
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Milind Changire
QA Contact:
URL:
Whiteboard:
Depends On: 1547888
Blocks:
 
Reported: 2018-03-21 12:13 UTC by Milind Changire
Modified: 2018-05-07 15:15 UTC
CC List: 4 users

Fixed In Version: glusterfs-4.0.2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1547888
Environment:
Last Closed: 2018-05-07 15:15:28 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Milind Changire 2018-03-21 12:13:38 UTC
+++ This bug was initially created as a clone of Bug #1547888 +++

Description of problem:
Whenever a BRICK_ATTACH or BRICK_TERMINATE management operation is triggered on the brick, glusterfs_autoscale_threads() scales the event-threads correctly. However, the subsequent xlator_reconfigure() calls pass the default value of the "event-threads" option down to server_reconfigure(), which rescales the event-threads to 2. This happens because the value of the "event-threads" option is always 1, presuming "server.event-threads" was never set to a different value.

e.g.
With brick-mux, event-threads are scaled by +1 for each BRICK_ATTACH and by -1 for each BRICK_TERMINATE. So, starting with 5 bricks, if 5 more bricks are added in a single operation, the brick process receives 5 BRICK_ATTACH messages and the event-threads are incrementally scaled up to 10. However, the xlator reconfigures that follow use the system default value of the "event-threads" option, scaling the event-threads back down to 2.

This may cause unwanted fop throttling at the bricks.
Care needs to be taken so that the event-thread count is not reset after rescaling for a BRICK_ATTACH/BRICK_TERMINATE management op; the sketch below walks through the sequence.
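
To make the sequence concrete, here is a small, self-contained C model of the interaction described above. It is NOT GlusterFS code: the globals and helpers (event_threads, autoscale_threads(), reconfigure()) are illustrative stand-ins, and the "+ 1" in the reconfigure path is only a modelling choice so the outcome matches the "option is 1, threads rescale to 2" behaviour described in this report.

/* Stand-alone model of the reported behaviour; all identifiers below are
 * illustrative, not GlusterFS internals. */
#include <stdio.h>

#define EVENT_THREADS_DEFAULT 1   /* "event-threads" default when unset */

static int event_threads = 1;     /* models the event pool's thread count */

/* brick-mux path: called once per BRICK_ATTACH (+1) / BRICK_TERMINATE (-1) */
static void autoscale_threads (int delta)
{
        event_threads += delta;
}

/* models the reconfigure path: it re-reads the "event-threads" option,
 * which is still at its default because "server.event-threads" was never
 * set, and pushes that value down to the event pool; the "+ 1" only
 * reproduces the "rescaled to 2" result from the description */
static void reconfigure (int configured_event_threads)
{
        event_threads = configured_event_threads + 1;
}

int main (void)
{
        int i;

        event_threads = 5;               /* 5 bricks already attached, per the example */

        for (i = 0; i < 5; i++)          /* 5 more bricks added in one operation */
                autoscale_threads (+1);
        printf ("after autoscale:   %d event-threads\n", event_threads);  /* 10 */

        reconfigure (EVENT_THREADS_DEFAULT);  /* option left at its default */
        printf ("after reconfigure: %d event-threads\n", event_threads);  /* 2 */

        return 0;
}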

--- Additional comment from Worker Ant on 2018-03-09 16:21:48 IST ---

REVIEW: https://review.gluster.org/19689 (rpcsvc: correct event-thread scaling) posted (#1) for review on master by Milind Changire

--- Additional comment from Worker Ant on 2018-03-12 14:19:13 IST ---

COMMIT: https://review.gluster.org/19689 committed in master by "Raghavendra G" <rgowdapp> with a commit message: rpcsvc: correct event-thread scaling

Problem:
Auto thread count derived from the number of attaches and detaches
was reset to 1 when server_reconfigure() was called.

Solution:
Avoid auto-thread-count reset to 1.

Change-Id: Ic00e86adb81ba3c828e354a6ccb638209ae58b3e
BUG: 1547888
Signed-off-by: Milind Changire <mchangir>
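
As an illustration of the "avoid the reset" idea, here is a hedged, self-contained C sketch in the same spirit as the model above. It shows one possible shape of the solution (track the autoscaled count separately and never let a reconfigure drop below it); it is not the code from https://review.gluster.org/19689, and all identifiers are again illustrative.

/* Stand-alone sketch of the fix direction ("avoid auto-thread-count
 * reset"); one possible shape of the idea, not the actual patch. */
#include <stdio.h>

static int event_threads = 1;        /* models the event pool's thread count */
static int auto_event_threads = 0;   /* count derived from attach/detach ops */

/* brick-mux path: remember how far attach/detach has scaled the pool */
static void autoscale_threads (int delta)
{
        auto_event_threads += delta;
        if (auto_event_threads > event_threads)
                event_threads = auto_event_threads;
}

/* reconfigure path: only ever raise the count; a default "event-threads"
 * value no longer resets the autoscaled thread count */
static void reconfigure (int configured_event_threads)
{
        if (configured_event_threads > auto_event_threads)
                event_threads = configured_event_threads;
}

int main (void)
{
        int i;

        for (i = 0; i < 10; i++)     /* 10 bricks attached over time */
                autoscale_threads (+1);

        reconfigure (1);             /* option left at its default of 1 */
        printf ("event-threads: %d\n", event_threads);   /* stays at 10 */

        return 0;
}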

Comment 1 Worker Ant 2018-03-21 12:18:23 UTC
REVIEW: https://review.gluster.org/19751 (rpcsvc: correct event-thread scaling) posted (#1) for review on release-4.0 by Milind Changire

Comment 2 Worker Ant 2018-03-22 18:48:37 UTC
COMMIT: https://review.gluster.org/19751 committed in release-4.0 by "Shyamsundar Ranganathan" <srangana> with a commit message: rpcsvc: correct event-thread scaling

Problem:
Auto thread count derived from the number of attaches and detaches
was reset to 1 when server_reconfigure() was called.

Solution:
Avoid auto-thread-count reset to 1.

mainline:
> BUG: 1547888
> Reviewed-on: https://review.gluster.org/19689
> Reviewed-by: Raghavendra G <rgowdapp>
> Signed-off-by: Milind Changire <mchangir>
(cherry picked from commit 0c3d984287d91d3fe1ffeef297252d912c08a410)

Change-Id: Ic00e86adb81ba3c828e354a6ccb638209ae58b3e
BUG: 1558959
Signed-off-by: Milind Changire <mchangir>

Comment 3 Shyamsundar 2018-05-07 15:15:28 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-4.0.2, please open a new bug report.

glusterfs-4.0.2 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-April/000097.html
[2] https://www.gluster.org/pipermail/gluster-users/

