Bug 1547888
| Summary: | [brick-mux] incorrect event-thread scaling in server_reconfigure() | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Milind Changire <mchangir> |
| Component: | core | Assignee: | Milind Changire <mchangir> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | mainline | CC: | amukherj, bugs, moagrawa, rgowdapp |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-v4.1.0 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1554255 1558959 (view as bug list) | Environment: | |
| Last Closed: | 2018-06-20 18:01:20 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1554255, 1558959 | | |
Description

Milind Changire 2018-02-22 08:36:30 UTC
REVIEW: https://review.gluster.org/19689 (rpcsvc: correct event-thread scaling) posted (#1) for review on master by Milind Changire

COMMIT: https://review.gluster.org/19689 committed in master by "Raghavendra G" <rgowdapp> with commit message:

    rpcsvc: correct event-thread scaling

    Problem:
    The auto thread count, derived from the number of attach and detach
    requests, was reset to 1 whenever server_reconfigure() was called.

    Solution:
    Avoid resetting the auto thread count to 1.

    Change-Id: Ic00e86adb81ba3c828e354a6ccb638209ae58b3e
    BUG: 1547888
    Signed-off-by: Milind Changire <mchangir>

This bug is being closed because a release that should address the reported issue is now available. If the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/
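For readers unfamiliar with the failure mode described in the commit message, the sketch below shows the general shape of the bug and the fix: a reconfigure path that rebuilds options from defaults silently undoes a thread count that was auto-scaled elsewhere (here, per brick attach/detach under brick multiplexing). This is only an illustration under assumed names (`svc_state`, `reconfigure_buggy`, `reconfigure_fixed`), not the actual GlusterFS code or patch.

```c
#include <stdio.h>

/* Illustrative state only; GlusterFS keeps this in its rpcsvc/server structs. */
struct svc_state {
    int auto_thread_count;   /* scaled up/down as bricks attach/detach */
    int configured_threads;  /* explicit setting from configuration, 0 if unset */
};

/* Buggy behaviour: reconfigure resets the count to its default,
 * discarding the auto-scaled value. */
static void reconfigure_buggy(struct svc_state *svc)
{
    svc->auto_thread_count = 1;
}

/* Fixed behaviour: keep the auto-scaled value unless an explicit
 * configuration overrides it. */
static void reconfigure_fixed(struct svc_state *svc)
{
    if (svc->configured_threads > 0)
        svc->auto_thread_count = svc->configured_threads;
    /* else: leave auto_thread_count untouched */
}

int main(void)
{
    struct svc_state svc = { .auto_thread_count = 4, .configured_threads = 0 };

    reconfigure_buggy(&svc);
    printf("after buggy reconfigure: %d event thread(s)\n", svc.auto_thread_count);

    svc.auto_thread_count = 4;   /* pretend four bricks were attached again */
    reconfigure_fixed(&svc);
    printf("after fixed reconfigure: %d event thread(s)\n", svc.auto_thread_count);
    return 0;
}
```

With four bricks attached, the buggy path drops the server back to a single event thread on every reconfigure, while the fixed path preserves the scaled count unless an explicit setting says otherwise.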