Bug 1566067
| Summary: | Volume status inode is broken with brickmux | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | hari gowtham <hgowtham> |
| Component: | glusterd | Assignee: | hari gowtham <hgowtham> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | mainline | CC: | amukherj, bugs, moagrawa, rhinduja, rhs-bugs, rmadaka, storage-qa-internal, vbellur |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.12.15 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1559452 | | |
| Clones: | 1569336 1569346 (view as bug list) | Environment: | |
| Last Closed: | 2018-06-20 18:03:42 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1559452, 1569336, 1569346 | | |
Comment 1
hari gowtham
2018-04-11 13:03:08 UTC
REVIEW: https://review.gluster.org/19846 (glusterd: volume inode/fd status broken with brick mux) posted (#1) for review on master by hari gowtham

COMMIT: https://review.gluster.org/19846 committed in master by "Atin Mukherjee" <amukherj> with a commit message:

glusterd: volume inode/fd status broken with brick mux

Problem: The inode/fd values were populated from the ctx received from the server xlator. Without brick mux, every brick ran in its own process, so searching for the server xlator and populating from it worked.

With brick mux, a number of bricks can be confined to a single process, and these bricks can be from different volumes too (if the max-bricks-per-process option is used). If they are from different volumes, using the server xlator to populate the status causes problems.

Fix: Use the brick itself to validate and populate the inode/fd status.

Signed-off-by: hari gowtham <hgowtham>
Change-Id: I2543fa5397ea095f8338b518460037bba3dfdbfd
fixes: bz#1566067

REVIEW: https://review.gluster.org/19903 (glusterd: volume inode/fd status broken with brick mux) posted (#1) for review on release-3.12 by hari gowtham

REVIEW: https://review.gluster.org/19904 (glusterd: volume inode/fd status broken with brick mux) posted (#1) for review on release-4.0 by hari gowtham

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.12.15, please open a new bug report.

glusterfs-3.12.15 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000114.html
[2] https://www.gluster.org/pipermail/gluster-users/
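For context, the affected queries are `gluster volume status <VOLNAME> inode` and `gluster volume status <VOLNAME> fd`. The sketch below illustrates the idea behind the fix as a minimal, self-contained C mock; the struct, field names, and helper functions here are invented for illustration and are not the actual GlusterFS types or the actual patch.

```c
/*
 * Illustrative sketch only (not the real GlusterFS code): with brick
 * multiplexing, one brick process hosts several brick xlators, possibly
 * from different volumes. Populating "volume status inode/fd" from the
 * first server xlator found in the process can therefore return another
 * brick's data. The fix idea: resolve the requested brick by name first.
 */
#include <stdio.h>
#include <string.h>

struct mock_xlator {
    const char *name;         /* brick identifier, e.g. "volA-brick0" */
    int active_inodes;        /* stand-in for the inode table counters */
    int open_fds;             /* stand-in for the fd table counters */
    struct mock_xlator *next; /* sibling bricks in the same process */
};

/* Find a brick xlator by name inside a multiplexed process graph. */
static struct mock_xlator *
find_brick_by_name(struct mock_xlator *graph, const char *brick)
{
    for (struct mock_xlator *xl = graph; xl; xl = xl->next)
        if (strcmp(xl->name, brick) == 0)
            return xl;
    return NULL;
}

/* Populate status from the requested brick, not from whichever
 * server xlator happens to come first in the process. */
static int
populate_inode_fd_status(struct mock_xlator *graph, const char *brick)
{
    struct mock_xlator *xl = find_brick_by_name(graph, brick);
    if (!xl) {
        fprintf(stderr, "brick %s is not served by this process\n", brick);
        return -1;
    }
    printf("%s: inodes=%d fds=%d\n", xl->name, xl->active_inodes,
           xl->open_fds);
    return 0;
}

int
main(void)
{
    /* Two bricks from different volumes multiplexed into one process. */
    struct mock_xlator b2 = { "volB-brick0", 11, 3, NULL };
    struct mock_xlator b1 = { "volA-brick0", 7, 42, &b2 };

    /* Before the fix, a status query for volB could be answered with
     * volA's tables, because the lookup ignored which brick was asked
     * for. Looking the brick up by name avoids the cross-volume mixup. */
    return populate_inode_fd_status(&b1, "volB-brick0");
}
```

The lookup-by-name step matters because once max-bricks-per-process allows more than one brick per process, there is no longer a one-to-one mapping between a process's server xlator and a brick, so any code that reads "the" server xlator's ctx can end up reporting counters that belong to a brick of a different volume.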