Bug 1652118
| Summary: | default cluster.max-bricks-per-process to 250 | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Atin Mukherjee <amukherj> |
| Component: | glusterd | Assignee: | bugs <bugs> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | mainline | CC: | bugs |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-6.0 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| Cloned To: | 1653073 (view as bug list) | Environment: | |
| Last Closed: | 2019-03-25 16:32:04 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1653073, 1653136 | | |
Description
Atin Mukherjee
2018-11-21 15:48:16 UTC
REVIEW: https://review.gluster.org/21701 (glusterd: make max-bricks-per-process default value to 250) posted (#1) for review on master by Atin Mukherjee

REVIEW: https://review.gluster.org/21701 (glusterd: make max-bricks-per-process default value to 250) posted (#2) for review on master by Atin Mukherjee

REVIEW: https://review.gluster.org/21797 (glusterd: set cluster.max-bricks-per-process to 250) posted (#1) for review on master by Atin Mukherjee

REVIEW: https://review.gluster.org/21797 (glusterd: set cluster.max-bricks-per-process to 250) posted (#2) for review on master by Atin Mukherjee

This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/
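For context, a minimal sketch of how this option is exercised from the gluster CLI. The commands use the standard `gluster volume set`/`get` syntax for cluster-wide options; the assumption here is that the pre-change default was 0 (no per-process cap, so all bricks multiplex into one process), which the linked patches raise to 250. Exact behavior should be verified against the glusterfs-6.0 release notes.

```sh
# cluster.max-bricks-per-process only takes effect when brick
# multiplexing is enabled.
gluster volume set all cluster.brick-multiplex on

# Inspect the current value; with this change, a fresh glusterfs-6.0
# install should report 250 (assumed prior default: 0, meaning no limit).
gluster volume get all cluster.max-bricks-per-process

# Override the per-process brick cap if 250 does not suit the deployment.
gluster volume set all cluster.max-bricks-per-process 150
```

Capping bricks per process bounds the memory and thread footprint of any single glusterfsd instance, at the cost of glusterd spawning additional brick processes once the cap is reached.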