Bug 1523050
| Summary: | glusterd consuming high memory | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Atin Mukherjee <amukherj> |
| Component: | glusterd | Assignee: | bugs <bugs> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | unspecified | Docs Contact: | |
| Priority: | high | | |
| Version: | 3.10 | CC: | abhishku, amukherj, bkunal, bmekala, bugs, jahernan, moagrawa, pdhange, psony, rhs-bugs, sbairagy, storage-qa-internal, vbellur |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.10.9 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1522775 | Environment: | |
| Last Closed: | 2018-01-08 16:03:38 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1522775 | | |
| Bug Blocks: | 1512470, 1523046, 1523048, 1526365 | | |
Comment 1
Worker Ant
2017-12-07 05:02:29 UTC
COMMIT: https://review.gluster.org/18980 committed in release-3.10 by "Atin Mukherjee" <amukherj> with a commit message:

glusterd: Free up svc->conn on volume delete

Daemons like snapd, tierd and gfproxyd are maintained on a per-volume basis, and on a volume delete we should destroy the rpc connection established for them.

> mainline patch: https://review.gluster.org/#/c/18957

Change-Id: Id1440e39da07b990fdb9b207df18da04b1ca8014
BUG: 1523050
Signed-off-by: Atin Mukherjee <amukherj>

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.10.9, please open a new bug report.

glusterfs-3.10.9 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-January/000088.html
[2] https://www.gluster.org/pipermail/gluster-users/
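For context, the leak pattern the patch addresses looks roughly like the sketch below: each per-volume daemon handle (snapd, tierd, gfproxyd) owns an rpc connection, and the volume-delete path must terminate those connections, otherwise the connection objects stay allocated for the lifetime of glusterd and memory grows with every create/delete cycle. This is a minimal, self-contained model, not the actual patch: the stand-in types (`rpc_conn_t`, `svc_t`, `volinfo_t`) and helpers (`conn_init`, `conn_term`, `volinfo_delete`) are illustrative names chosen to mirror glusterd's conn-mgmt conventions; see the review links above for the authoritative change.

```c
/* Minimal model of the leak and the fix, with simplified stand-in types.
 * The real code lives in glusterd's conn-mgmt and volume-ops files. */
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for glusterd's rpc client: a refcounted connection object. */
typedef struct rpc_conn {
    int refcount;
} rpc_conn_t;

static rpc_conn_t *
conn_init(void)
{
    rpc_conn_t *c = calloc(1, sizeof(*c));
    c->refcount = 1;
    return c;
}

/* Stand-in for terminating a connection: drop our reference and free
 * the object when the last reference goes away. */
static void
conn_term(rpc_conn_t **cp)
{
    if (*cp && --(*cp)->refcount == 0) {
        free(*cp);
        *cp = NULL;
    }
}

/* Per-volume daemon handle: one each for snapd, tierd and gfproxyd. */
typedef struct svc {
    rpc_conn_t *conn;
} svc_t;

typedef struct volinfo {
    svc_t snapd, tierd, gfproxyd;
} volinfo_t;

static volinfo_t *
volinfo_new(void)
{
    volinfo_t *v = calloc(1, sizeof(*v));
    v->snapd.conn = conn_init();
    v->tierd.conn = conn_init();
    v->gfproxyd.conn = conn_init();
    return v;
}

/* The fix: terminate each per-volume daemon connection before freeing
 * the volinfo. Without the three conn_term() calls, one connection
 * object per daemon leaks on every volume delete, which is the steady
 * memory growth reported against glusterd. */
static void
volinfo_delete(volinfo_t *v)
{
    conn_term(&v->snapd.conn);
    conn_term(&v->tierd.conn);
    conn_term(&v->gfproxyd.conn);
    free(v);
}

int
main(void)
{
    /* Create/delete churn: with the fix, every connection is freed. */
    for (int i = 0; i < 1000; i++) {
        volinfo_t *v = volinfo_new();
        volinfo_delete(v);
    }
    puts("no leaked connections");
    return 0;
}
```

In glusterd itself the termination goes through the conn-mgmt layer, which unrefs the underlying rpc client rather than freeing it directly; the plain refcount above is only there to keep the example runnable on its own.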