Bug 1229282
| Summary: | Disperse volume: Huge memory leak of glusterfsd process | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Vijaikumar Mallikarjuna <vmallika> |
| Component: | quota | Assignee: | Vijaikumar Mallikarjuna <vmallika> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | urgent | Docs Contact: | |
| Priority: | high | | |
| Version: | 3.7.0 | CC: | amukherj, bugs, byarlaga, gluster-bugs, jahernan, jbyers, nsathyan, pkarampu, rkavunga, smohan, vmallika |
| Target Milestone: | --- | Keywords: | Reopened, Triaged |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.7.3 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1207735 | Environment: | |
| Last Closed: | 2015-07-30 09:50:34 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1207735, 1259697 | | |
| Bug Blocks: | 1186580, 1224177, 1233025 | | |
Description
Vijaikumar Mallikarjuna 2015-06-08 11:12:17 UTC
REVIEW: http://review.gluster.org/11124 (features/quota: Fix ref-leak) posted (#1) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

REVIEW: http://review.gluster.org/11124 (features/quota: Fix ref-leak) posted (#2) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

COMMIT: http://review.gluster.org/11124 committed in release-3.7 by Raghavendra G (rgowdapp)

------
commit bc743c012aca8b5854baf1b71a9ec9591c378645
Author: Pranith Kumar K <pkarampu>
Date: Tue Jun 2 17:58:00 2015 +0530

features/quota: Fix ref-leak

This is a backport of http://review.gluster.org/#/c/11045

> Change-Id: I0b44b70f07be441e044d9dfc5c2b64bd5b4cac18
> BUG: 1207735
> Signed-off-by: Pranith Kumar K <pkarampu>
> Reviewed-on: http://review.gluster.org/11045
> Tested-by: Gluster Build System <jenkins.com>
> Reviewed-by: Raghavendra G <rgowdapp>
> Tested-by: Raghavendra G <rgowdapp>
> Signed-off-by: vmallika <vmallika>

Change-Id: Id740d74fb5cf7a9b23027dbbb0a9f42616dcf2fc
BUG: 1229282
Signed-off-by: vmallika <vmallika>
Reviewed-on: http://review.gluster.org/11124
Tested-by: Gluster Build System <jenkins.com>
Reviewed-by: Raghavendra G <rgowdapp>
Tested-by: Raghavendra G <rgowdapp>

Patch submitted: http://review.gluster.org/#/c/11321/

REVIEW: http://review.gluster.org/11321 (quota/marker: fix mem-leak, free contribution node) posted (#2) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

REVIEW: http://review.gluster.org/11321 (quota/marker: fix mem-leak, free contribution node) posted (#3) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.2, please reopen this bug report.

glusterfs-3.7.2 has been announced on the Gluster Packaging mailinglist [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/packaging/2015-June/000006.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

REVIEW: http://review.gluster.org/11321 (quota/marker: fix mem-leak, free contribution node) posted (#5) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

REVIEW: http://review.gluster.org/11321 (quota/marker: fix mem-leak, free contribution node) posted (#6) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

Memory leak is still seen with glusterfs-3.7.2, so reopening the bug.

REVIEW: http://review.gluster.org/11401 (quota/marker: fix mem-leak in marker) posted (#1) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

REVIEW: http://review.gluster.org/11401 (quota/marker: fix mem-leak in marker) posted (#4) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

COMMIT: http://review.gluster.org/11401 committed in release-3.7 by Raghavendra G (rgowdapp)

------
commit 08586ee518de438fe2bbbaa74ae4c9a02a5d88cf
Author: vmallika <vmallika>
Date: Wed Jun 24 11:56:30 2015 +0530

quota/marker: fix mem-leak in marker

This is a backport of http://review.gluster.org/#/c/11361/

> When removing the contribution xattr, we also need to free the
> contribution node in memory. Use a ref/unref mechanism to handle
> contribution node memory.
>
> local->xdata should be freed in mq_local_unref.
>
> There is another source of huge memory consumption in the function
> mq_inspect_directory_xattr_task, where the dirty flag is not set.
>
> Change-Id: Ieca3ab4bf410c51259560e778bce4e81b9d888bf
> BUG: 1207735
> Signed-off-by: vmallika <vmallika>
> Reviewed-on: http://review.gluster.org/11361
> Reviewed-by: Krishnan Parthasarathi <kparthas>
> Tested-by: NetBSD Build System <jenkins.org>
> Reviewed-by: Raghavendra G <rgowdapp>
> Tested-by: Raghavendra G <rgowdapp>

Change-Id: I3038b41307f30867fa728054469ba917fd625e95
BUG: 1229282
Signed-off-by: vmallika <vmallika>
Reviewed-on: http://review.gluster.org/11401
Tested-by: Gluster Build System <jenkins.com>
Tested-by: NetBSD Build System <jenkins.org>
Reviewed-by: Raghavendra G <rgowdapp>
Tested-by: Raghavendra G <rgowdapp>
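The contribution-node fix above boils down to two things: the in-memory contribution node must be reference counted and freed on its last unref, and the dict attached to the marker's `local` must be released in the same unref path. Below is a minimal, self-contained C sketch of that pattern; the names (`contrib_node_t`, `contrib_ref`, `contrib_unref`, `local_unref`) are illustrative stand-ins, not the actual GlusterFS marker structures or API.

```c
#include <stdlib.h>

/* Illustrative stand-ins; not the real marker structures. */
typedef struct contrib_node {
    int  refcount;          /* protects lifetime across concurrent txns    */
    char key[64];           /* e.g. the contribution xattr key             */
} contrib_node_t;

typedef struct marker_local {
    contrib_node_t *contri;
    void           *xdata;  /* stand-in for the dict attached to local     */
} marker_local_t;

static contrib_node_t *contrib_ref(contrib_node_t *node)
{
    if (node)
        node->refcount++;   /* real code would take a lock or use atomics  */
    return node;
}

static void contrib_unref(contrib_node_t *node)
{
    if (node && --node->refcount == 0)
        free(node);          /* freed exactly once, on the last unref       */
}

/* The leak pattern: the in-memory node (and local->xdata) was never
 * released when the contribution xattr went away. Dropping both in the
 * local-unref path closes the leak. */
static void local_unref(marker_local_t *local)
{
    if (!local)
        return;
    contrib_unref(local->contri);
    free(local->xdata);      /* analogous to releasing the dict             */
    free(local);
}

int main(void)
{
    marker_local_t *local = calloc(1, sizeof(*local));
    local->contri = contrib_ref(calloc(1, sizeof(contrib_node_t)));
    local->xdata  = malloc(16);
    local_unref(local);      /* everything released; no leak                */
    return 0;
}
```

In the real marker code the ref/unref calls sit behind GlusterFS locking primitives and the dict has its own unref; the sketch only shows where the missing releases belong.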
Mem leak is still seen, so reopening the bug.

REVIEW: http://review.gluster.org/11527 (quota/marker: fix mem leak in marker) posted (#1) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

REVIEW: http://review.gluster.org/11528 (posix: fix mem-leak in posix_get_ancestry error path) posted (#1) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

REVIEW: http://review.gluster.org/11529 (quota: fix mem leak in quota enforcer) posted (#1) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

COMMIT: http://review.gluster.org/11529 committed in release-3.7 by Raghavendra G (rgowdapp)

------
commit 74a143100fa4b9532d37bed39504dcea9d371d18
Author: vmallika <vmallika>
Date: Fri Jul 3 17:32:04 2015 +0530

quota: fix mem leak in quota enforcer

This is a backport of review.gluster.org/#/c/11526/

Do inode_unref on parent.

> Change-Id: I21d82eb8716dd73aa2dc291b3ae8506e4fb4ea8b
> BUG: 1207735
> Signed-off-by: vmallika <vmallika>

Change-Id: I4caeedbe8721b660df1c8502a0a42033f1d40a97
BUG: 1229282
Signed-off-by: vmallika <vmallika>
Reviewed-on: http://review.gluster.org/11529
Tested-by: Gluster Build System <jenkins.com>
Tested-by: NetBSD Build System <jenkins.org>
Reviewed-by: Raghavendra G <rgowdapp>
Tested-by: Raghavendra G <rgowdapp>
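The enforcer fix (commit 74a14310, "Do inode_unref on parent") is a reference-count balance: each parent inode picked up while walking towards the root must be released once its quota limit has been checked. A minimal sketch of that walk follows, assuming a plain ref-counted inode type rather than GlusterFS's real inode_t API; all names here are illustrative only.

```c
#include <stdlib.h>

/* Illustrative inode with a plain reference count. */
typedef struct inode {
    struct inode *parent;
    int           refcount;
} inode_t;

static inode_t *inode_ref(inode_t *in)
{
    if (in)
        in->refcount++;
    return in;
}

static void inode_unref(inode_t *in)
{
    if (in && --in->refcount == 0)
        free(in);
}

/* Walk towards the root, checking quota limits on each ancestor.
 * The leak was taking a ref on each parent and never releasing it. */
static void check_ancestors(inode_t *in)
{
    inode_t *parent = inode_ref(in ? in->parent : NULL);

    while (parent) {
        /* ... enforce the limit configured on this ancestor ... */
        inode_t *next = inode_ref(parent->parent);
        inode_unref(parent);   /* the missing unref in the buggy code */
        parent = next;
    }
}

int main(void)
{
    inode_t *root  = calloc(1, sizeof(*root));
    inode_t *child = calloc(1, sizeof(*child));
    root->refcount  = 1;
    child->refcount = 1;
    child->parent   = root;

    check_ancestors(child);    /* root's refcount returns to 1: no leak */

    inode_unref(child);
    inode_unref(root);
    return 0;
}
```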
REVIEW: http://review.gluster.org/11527 (quota/marker: fix mem leak in marker) posted (#2) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

COMMIT: http://review.gluster.org/11527 committed in release-3.7 by Raghavendra G (rgowdapp)

------
commit 3f9dae11173475d759adb16dd64bea9cef0bf1c4
Author: vmallika <vmallika>
Date: Mon Jun 29 19:12:28 2015 +0530

quota/marker: fix mem leak in marker

This is a backport of http://review.gluster.org/#/c/11457/

Problem 1) Marker accounting now happens in the background, so another create txn can be initiated for the same inode before create_xattr_txn completes. If a few hundred txns are initiated before completion, they can block all synctask threads waiting on a lock, consume a lot of memory, and make the background accounting take much longer to complete. This patch improves the locking mechanism, which improves performance and reduces memory consumption.

Problem 2) For every lookup, and for all inodes in readdirp, we were initiating a new txn. This can leave many txns pending in the synctask queue and lead to huge memory consumption. inspect file/dir should start a txn only if there is some delta.

Problem 3) When there are multiple write operations on the same inode and all the synctask threads are busy, checking the updation_status flag in the background moves every txn to the synctask queue, which increases memory usage. One queued txn per inode is sufficient, so check and set the updation flag before moving a txn to the background.

> Change-Id: Ic42ce00f0a50ce51c7128ba68a1b6a0699a1cd14
> BUG: 1207735
> Signed-off-by: vmallika <vmallika>

Change-Id: I52a05b99b19b97c79b69671120f53e05481f99cd
BUG: 1229282
Signed-off-by: vmallika <vmallika>
Reviewed-on: http://review.gluster.org/11527
Tested-by: NetBSD Build System <jenkins.org>
Tested-by: Gluster Build System <jenkins.com>
Reviewed-by: Raghavendra G <rgowdapp>
Tested-by: Raghavendra G <rgowdapp>
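Problem 3 above describes a check-and-set guard: a write should queue a background accounting txn only if no txn for that inode is already queued. A minimal sketch of the idea using a per-inode atomic flag; `inode_ctx_t`, `mark_inode_dirty`, and `queue_background_txn` are illustrative names, not the marker's actual updation_status handling.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative per-inode context; `updation_queued` plays the role of the
 * updation/dirty flag described in the commit message. */
typedef struct inode_ctx {
    atomic_bool updation_queued;
} inode_ctx_t;

/* Stand-in for handing a txn to the background synctask queue. */
static void queue_background_txn(inode_ctx_t *ctx)
{
    printf("queued one accounting txn for %p\n", (void *)ctx);
}

/* Called on every write. Only the first caller after the flag was cleared
 * actually queues work; later writes see the flag already set and return,
 * so at most one txn per inode sits in the queue. */
static void mark_inode_dirty(inode_ctx_t *ctx)
{
    bool expected = false;

    if (atomic_compare_exchange_strong(&ctx->updation_queued, &expected, true))
        queue_background_txn(ctx);
}

/* The background worker clears the flag once the accounting txn finishes,
 * allowing the next write to queue a fresh txn. */
static void background_txn_done(inode_ctx_t *ctx)
{
    atomic_store(&ctx->updation_queued, false);
}

int main(void)
{
    inode_ctx_t ctx = { .updation_queued = false };

    mark_inode_dirty(&ctx);    /* queues a txn            */
    mark_inode_dirty(&ctx);    /* no-op: already queued   */
    background_txn_done(&ctx);
    mark_inode_dirty(&ctx);    /* queues the next txn     */
    return 0;
}
```

The real patch does the equivalent test under the marker's own locks; the point is that the flag is checked and set before the txn is handed to the background queue, not after.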
REVIEW: http://review.gluster.org/11593 (quota/marker: fix mem leak in marker) posted (#1) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

REVIEW: http://review.gluster.org/11595 (quota/marker: use smaller stacksize in synctask for marker updation) posted (#1) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

REVIEW: http://review.gluster.org/11619 (quota/marker: inspect file/dir invoked without having quota xattrs requested) posted (#1) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

REVIEW: http://review.gluster.org/11620 (quota/marker: fix mem-leak in marker) posted (#1) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

REVIEW: http://review.gluster.org/11619 (quota/marker: inspect file/dir invoked without having quota xattrs requested) posted (#2) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

REVIEW: http://review.gluster.org/11619 (quota/marker: inspect file/dir invoked without having quota xattrs requested) posted (#3) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

REVIEW: http://review.gluster.org/11620 (quota/marker: fix mem-leak in marker) posted (#2) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

REVIEW: http://review.gluster.org/11595 (quota/marker: use smaller stacksize in synctask for marker updation) posted (#3) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

COMMIT: http://review.gluster.org/11595 committed in release-3.7 by Raghavendra G (rgowdapp)

------
commit c6de1e9de73e5ce08bf9099f14da74c2c1946132
Author: vmallika <vmallika>
Date: Thu Jul 9 15:34:21 2015 +0530

quota/marker: use smaller stacksize in synctask for marker updation

This is a backport of http://review.gluster.org/#/c/11499/

The default stacksize that synctask uses is 2M; for marker we set it to 16k. Also move the marker xlator close to io-threads to have a smaller stack.

> Change-Id: I8730132a6365cc9e242a3564a1e615d94ef2c651
> BUG: 1207735
> Signed-off-by: vmallika <vmallika>

Change-Id: Id1cb6288a38d370956cc47aed5253ff95f04c966
BUG: 1229282
Signed-off-by: vmallika <vmallika>
Reviewed-on: http://review.gluster.org/11595
Tested-by: Gluster Build System <jenkins.com>
Reviewed-by: Raghavendra G <rgowdapp>
Tested-by: Raghavendra G <rgowdapp>

COMMIT: http://review.gluster.org/11620 committed in release-3.7 by Raghavendra G (rgowdapp)

------
commit 12987fab053db3893acd5a6cc71ed6a88843756a
Author: vmallika <vmallika>
Date: Sun Jul 12 21:03:54 2015 +0530

quota/marker: fix mem-leak in marker

This is a backport of http://review.gluster.org/#/c/11617/

Free local in error paths.

> Change-Id: I76f69e7d746af8eedea34354ff5a6bf50234e50e
> BUG: 1207735
> Signed-off-by: vmallika <vmallika>

Change-Id: I0f87ee11970e7bf6f8c910d112fc988c2afd6eca
BUG: 1229282
Signed-off-by: vmallika <vmallika>
Reviewed-on: http://review.gluster.org/11620
Tested-by: Gluster Build System <jenkins.com>
Reviewed-by: Raghavendra G <rgowdapp>
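The last fix (commit 12987fab, "Free local in error paths") is the usual single-exit cleanup pattern: allocate the per-operation `local` once, and route every early error return through one label that frees it. The C sketch below illustrates that pattern under those assumptions; `local_t`, `local_free`, and `start_txn` are illustrative names, not the marker's real types or functions.

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative request context; stands in for the marker's `local`. */
typedef struct local {
    char *path;
    void *xdata;
} local_t;

static void local_free(local_t *local)
{
    if (!local)
        return;
    free(local->path);
    free(local->xdata);
    free(local);
}

/* The leak pattern: `local` was allocated up front, but early error
 * returns skipped the free. Routing every exit through one cleanup label
 * releases it on both success and failure in this simplified sketch. */
static int start_txn(const char *path)
{
    int      ret   = -1;
    local_t *local = calloc(1, sizeof(*local));

    if (!local)
        goto out;

    local->path = strdup(path);
    if (!local->path)
        goto out;            /* previously: return -1 (leaking local)   */

    local->xdata = malloc(32);
    if (!local->xdata)
        goto out;            /* previously: return -1 (leaking local)   */

    /* ... do the accounting work ... */
    ret = 0;

out:
    local_free(local);       /* safe on partial initialisation and NULL */
    return ret;
}

int main(void)
{
    return start_txn("/export/brick1/dir");
}
```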
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.3, please open a new bug report.

glusterfs-3.7.3 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/12078
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user