Bug 1201203
Summary: | glusterd OOM killed when repeating volume set operation in a loop | |
---|---|---|---
Product: | [Community] GlusterFS | Reporter: | Atin Mukherjee <amukherj>
Component: | core | Assignee: | Atin Mukherjee <amukherj>
Status: | CLOSED CURRENTRELEASE | QA Contact: |
Severity: | high | Docs Contact: |
Priority: | unspecified | |
Version: | mainline | CC: | bugs, gluster-bugs, jbyers, nlevinki, rhs-bugs, sasundar, vbellur
Target Milestone: | --- | Keywords: | Triaged
Target Release: | --- | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | core | |
Fixed In Version: | glusterfs-3.7.0 | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | 1198968 | Environment: |
Last Closed: | 2015-05-14 17:29:19 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1198968 | |
Description
Atin Mukherjee, 2015-03-12 10:11:20 UTC
REVIEW: http://review.gluster.org/9862 (core : free up mem_acct.rec in xlator_members_free) posted (#1) for review on master by Atin Mukherjee (amukherj)

REVIEW: http://review.gluster.org/9862 (core : free up mem_acct.rec in xlator_members_free) posted (#2) for review on master by Atin Mukherjee (amukherj)

REVIEW: http://review.gluster.org/9862 (core : free up mem_acct.rec in xlator_destroy) posted (#3) for review on master by Atin Mukherjee (amukherj)

COMMIT: http://review.gluster.org/9862 committed in master by Kaleb KEITHLEY (kkeithle)

------

commit 4481b03ae2e8ebbd091b0436b96e97707b4ec41f
Author: Atin Mukherjee <amukherj>
Date: Thu Mar 12 15:43:56 2015 +0530

core : free up mem_acct.rec in xlator_destroy

Problem: We observed that glusterd was OOM killed after some minutes when the volume set command was run in a loop.

Analysis: Initially the suspicion was on the glusterd code, but a deep dive into the codebase revealed that while validating all the options as part of graph reconfiguration, the xlator object is freed but one of its members, mem_acct, is left behind, which causes a memory leak.

Solution: Free up the xlator's mem_acct.rec in xlator_destroy().

Change-Id: Ie9e7267e1ac4ab7b8af6e4d7c6660dfe99b4d641
BUG: 1201203
Signed-off-by: Atin Mukherjee <amukherj>
Reviewed-on: http://review.gluster.org/9862
Reviewed-by: Niels de Vos <ndevos>
Tested-by: Gluster Build System <jenkins.com>
Reviewed-by: Krishnan Parthasarathi <kparthas>
Reviewed-by: Raghavendra Bhat <raghavendra>
Reviewed-by: Kaleb KEITHLEY <kkeithle>
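To make the leak concrete, below is a minimal, self-contained C sketch of the pattern the commit describes. The types and function names here are simplified stand-ins modelled loosely on libglusterfs (the real xlator_t and mem_acct definitions differ); it illustrates the shape of the fix, not the actual patch, which is the Gerrit change linked above.

```c
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for the per-type memory-accounting record
 * kept by each xlator; not the real libglusterfs definition. */
struct mem_acct_rec {
        size_t size;        /* bytes currently allocated for this type */
        size_t num_allocs;  /* live allocations of this type */
};

struct xlator {
        char *name;
        struct {
                unsigned int         num_types;
                struct mem_acct_rec *rec;  /* per-type accounting table */
        } mem_acct;
};

static struct xlator *
xlator_new (const char *name, unsigned int num_types)
{
        struct xlator *xl = calloc (1, sizeof (*xl));

        if (!xl)
                return NULL;
        xl->name = strdup (name);
        xl->mem_acct.num_types = num_types;
        /* The accounting table is allocated when the xlator is set up... */
        xl->mem_acct.rec = calloc (num_types, sizeof (*xl->mem_acct.rec));
        return xl;
}

static void
xlator_destroy (struct xlator *xl)
{
        if (!xl)
                return;
        free (xl->name);
        /* ...but before the fix it was never released here, so every
         * xlator torn down during graph validation leaked one table. */
        free (xl->mem_acct.rec);
        free (xl);
}

int
main (void)
{
        /* Each volume set triggers a graph validation cycle that builds
         * and destroys xlator objects; run in a loop, the leaked tables
         * accumulate until the kernel OOM-kills glusterd. */
        for (int i = 0; i < 1000000; i++)
                xlator_destroy (xlator_new ("demo-xlator", 128));
        return 0;
}
```

A likely reason the free was easy to miss: this table is the bookkeeping behind the memory-accounting wrappers themselves, so it has to be allocated outside the accounted path and therefore never appears in the accounting data it maintains.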
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user