Bug 1229282 - Disperse volume: Huge memory leak of glusterfsd process
Summary: Disperse volume: Huge memory leak of glusterfsd process
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: quota
Version: 3.7.0
Hardware: All
OS: All
Priority: high
Severity: urgent
Target Milestone: ---
Assignee: Vijaikumar Mallikarjuna
QA Contact:
URL:
Whiteboard:
Depends On: 1207735 1259697
Blocks: qe_tracker_everglades 1224177 glusterfs-3.7.3
 
Reported: 2015-06-08 11:12 UTC by Vijaikumar Mallikarjuna
Modified: 2016-05-11 22:49 UTC
CC List: 11 users

Fixed In Version: glusterfs-3.7.3
Doc Type: Bug Fix
Doc Text:
Clone Of: 1207735
Environment:
Last Closed: 2015-07-30 09:50:34 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Vijaikumar Mallikarjuna 2015-06-08 11:12:17 UTC
+++ This bug was initially created as a clone of Bug #1207735 +++

Description of problem:
=======================
There is a huge memory leak in the glusterfsd process with a disperse volume. Created a plain disperse volume and converted it to distributed-disperse. There is no I/O from the clients, but the resident memory of the glusterfsd process reaches up to 20GB (as seen in top) and the system becomes unresponsive as the whole memory gets consumed.

Version-Release number of selected component (if applicable):
=============================================================
[root@vertigo geo-master]# gluster --version
glusterfs 3.7dev built on Mar 31 2015 01:05:54
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

Additional info:
================
Top output of node1:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND               
 9902 root      20   0 4321m 1.4g 2920 D 20.0  4.5   1:28.68 glusterfsd            
10758 root      20   0 4321m 1.4g 2920 D 18.4  4.5   1:26.33 glusterfsd            
10053 root      20   0 4961m 1.6g 2920 D 18.1  5.2   1:28.64 glusterfsd            
10729 root      20   0 3681m 1.0g 2920 D 17.1  3.3   1:26.60 glusterfsd            
10759 root      20   0 4321m 1.4g 2920 S 17.1  4.5   1:25.68 glusterfsd            
10756 root      20   0 3745m 1.4g 2920 S 16.4  4.6   1:30.05 glusterfsd            
 9939 root      20   0 4321m 1.4g 2920 S 16.4  4.5   1:27.61 glusterfsd            
10775 root      20   0 4961m 1.6g 2920 D 15.8  5.2   1:26.52 glusterfsd            
10723 root      20   0 3745m 1.4g 2920 S 15.8  4.6   1:32.41 glusterfsd            
10728 root      20   0 34.0g  19g 2920 S 15.8 63.3   1:31.89 glusterfsd            
10054 root      20   0 3681m 1.0g 2920 D 15.8  3.3   1:28.10 glusterfsd            
10090 root      20   0 3681m 1.0g 2920 S 15.8  3.3   1:33.02 glusterfsd            
10789 root      20   0 3681m 1.0g 2920 D 15.8  3.3   1:26.16 glusterfsd            
10739 root      20   0 4961m 1.6g 2920 D 15.4  5.2   1:31.29 glusterfsd            
10763 root      20   0 4961m 1.6g 2920 S 15.4  5.2   1:27.03 glusterfsd            
10727 root      20   0 34.0g  19g 2920 S 15.4 63.3   1:31.35 glusterfsd            
10782 root      20   0 34.0g  19g 2920 S 15.4 63.3   1:31.86 glusterfsd            
10062 root      20   0 3425m 1.1g 2920 S 15.4  3.5   1:44.85 glusterfsd            
10783 root      20   0 3681m 1.0g 2920 D 15.4  3.3   1:26.73 glusterfsd            
 9940 root      20   0 4321m 1.4g 2920 S 15.4  4.5   1:28.84 glusterfsd            
10724 root      20   0 4321m 1.4g 2920 D 15.4  4.5   1:25.27 glusterfsd            
10753 root      20   0 4321m 1.4g 2920 S 15.4  4.5   1:26.44 glusterfsd            
10733 root      20   0 3745m 1.4g 2920 R 15.1  4.6   1:28.42 glusterfsd            
10755 root      20   0 3745m 1.4g 2920 S 15.1  4.6   1:31.19 glusterfsd            
10091 root      20   0 34.0g  19g 2920 S 15.1 63.3   1:33.56 glusterfsd            
10778 root      20   0 34.0g  19g 2920 S 15.1 63.3   1:31.88 glusterfsd            
 9894 root      20   0 3681m 1.0g 2920 D 15.1  3.3   1:32.51 glusterfsd            
10736 root      20   0 3681m 1.0g 2920 S 15.1  3.3   1:27.33 glusterfsd            
10746 root      20   0 4321m 1.4g 2920 D 15.1  4.5   1:25.14 glusterfsd            
10744 root      20   0 4961m 1.6g 2920 S 14.8  5.2   1:29.22 glusterfsd            
10743 root      20   0 3745m 1.4g 2920 S 14.8  4.6   1:29.96 glusterfsd            
10784 root      20   0 34.0g  19g 2920 S 14.8 63.3   1:31.92 glusterfsd            
 9735 root      20   0 4961m 1.6g 2920 S 14.4  5.2   1:28.84 glusterfsd            
 9903 root      20   0 4961m 1.6g 2920 S 14.4  5.2   1:28.63 glusterfsd    

Attaching the statedumps of the volumes.

--- Additional comment from Bhaskarakiran on 2015-03-31 11:18:11 EDT ---



--- Additional comment from Bhaskarakiran on 2015-05-05 01:17:36 EDT ---

On recent builds, bricks and NFS servers are crashing with OOM messages. The sequence of events is:

1. The client mount hangs.
2. A brick crashes.
3. The volume export is no longer shown by rpcinfo.
4. The NFS server crashes with an OOM.

--- Additional comment from Xavier Hernandez on 2015-05-06 12:03:14 EDT ---

I've tried to reproduce this issue with current master and have been unable to.

Do you do anything else besides the add-brick and rebalance?

--- Additional comment from Bhaskarakiran on 2015-05-13 06:14:49 EDT ---

Even with a plain disperse volume and an NFS mount, the issue persists on the 3.7 beta2 build. I mounted the volume over NFS, ran iozone -a a couple of times, and am seeing the leak. The process is taking almost 40GB.

14314 root      20   0 17.1g 8.0g 2528 S 20.0 12.7  41:15.49 glusterfsd                            
14396 root      20   0 17.1g 8.0g 2528 S 19.4 12.7  42:16.27 glusterfsd                            
14397 root      20   0 17.1g 8.0g 2528 S 19.4 12.7  43:34.59 glusterfsd                            
14721 root      20   0 17.1g 8.0g 2528 S 19.4 12.7  43:08.11 glusterfsd                            
14697 root      20   0 17.1g 8.0g 2528 S 19.0 12.7  41:04.22 glusterfsd                            
14702 root      20   0 17.1g 8.0g 2528 S 19.0 12.7  41:13.08 glusterfsd                            
14722 root      20   0 17.1g 8.0g 2528 S 19.0 12.7  40:32.11 glusterfsd                            
14713 root      20   0 65.3g  40g 2528 S 18.7 64.5  40:38.43 glusterfsd  >>>>>>>>>>>>>>>>>>>>>>>                          
14735 root      20   0 65.3g  40g 2528 S 18.7 64.5  41:52.18 glusterfsd  >>>>>>>>>>>>>>>>>>>>>>>                          
14392 root      20   0 17.1g 8.0g 2528 S 18.7 12.7  43:33.64 glusterfsd                            
14704 root      20   0 17.1g 8.0g 2528 S 18.7 12.7  41:59.24 glusterfsd                            
14714 root      20   0 65.3g  40g 2528 S 18.4 64.5  39:08.16 glusterfsd  >>>>>>>>>>>>>>>>>>>>>>>                            
14737 root      20   0 65.3g  40g 2528 S 18.4 64.5  41:03.79 glusterfsd                            
14701 root      20   0 17.1g 8.0g 2528 S 18.4 12.7  41:18.25 glusterfsd                            
14684 root      20   0 10.3g 4.4g 2532 S 18.4  7.0  38:15.19 glusterfsd                            
14388 root      20   0 65.3g  40g 2528 S 18.1 64.5  40:20.30 glusterfsd  >>>>>>>>>>>>>>>>>>>>>>>                            
14716 root      20   0 65.3g  40g 2528 R 18.1 64.5  40:24.51 glusterfsd  >>>>>>>>>>>>>>>>>>>>>>>                            
14736 root      20   0 65.3g  40g 2528 R 18.1 64.5  38:40.43 glusterfsd  >>>>>>>>>>>>>>>>>>>>>>>                            
14703 root      20   0 17.1g 8.0g 2528 S 18.1 12.7  41:06.25 glusterfsd                            
14331 root      20   0 10.3g 4.4g 2532 S 18.1  7.0  38:29.85 glusterfsd                            
14294 root      20   0 65.3g  40g 2528 R 17.7 64.5  38:03.70 glusterfsd  >>>>>>>>>>>>>>>>>>>>>>>                            
14395 root      20   0 65.3g  40g 2528 R 17.7 64.5  38:51.38 glusterfsd  >>>>>>>>>>>>>>>>>>>>>>>                            
14705 root      20   0 17.1g 8.0g 2528 S 17.7 12.7  43:05.49 glusterfsd                            
14723 root      20   0 17.1g 8.0g 2528 R 17.7 12.7  42:20.05 glusterfsd                            
14740 root      20   0 17.1g 8.0g 2528 S 17.7 12.7  39:55.02 glusterfsd                            
14389 root      20   0 10.3g 4.4g 2532 S 17.7  7.0  39:52.06 glusterfsd                            
14675 root      20   0 10.3g 4.4g 2532 S 17.7  7.0  38:26.46 glusterfsd                            
14678 root      20   0 65.3g  40g 2528 S 17.4 64.5  40:18.39 glusterfsd  >>>>>>>>>>>>>>>>>>>>>>>                            
14734 root      20   0 65.3g  40g 2528 S 17.4 64.5  39:07.99 glusterfsd  >>>>>>>>>>>>>>>>>>>>>>>                            
14328 root      20   0 10.3g 4.4g 2532 S 17.4  7.0  38:01.29 glusterfsd                            
14393 root      20   0 10.3g 4.4g 2532 S 17.4  7.0  39:14.94 glusterfsd                            
14683 root      20   0 10.3g 4.4g 2532 S 17.4  7.0  38:10.70 glusterfsd                            
14696 root      20   0 65.3g  40g 2528 S 17.1 64.5  39:26.60 glusterfsd  >>>>>>>>>>>>>>>>>>>>>>>                            
14390 root      20   0 17.1g 8.0g 2528 S 17.1 12.7  41:03.34 glusterfsd                            
14724 root      20   0 17.1g 8.0g 2528 S 17.1 12.7  41:06.26 glusterfsd                            
14329 root      20   0 10.3g 4.4g 2532 S 17.1  7.0  38:46.04 glusterfsd                            
14712 root      20   0 10.3g 4.4g 2532 S 17.1  7.0  38:18.10 glusterfsd                            
14297 root      20   0 65.3g  40g 2528 S 16.7 64.5  40:29.80 glusterfsd  >>>>>>>>>>>>>>>>>>>>>>>                           
14670 root      20   0 65.3g  40g 2528 S 16.7 64.5  39:24.16 glusterfsd  >>>>>>>>>>>>>>>>>>>>>>>                            
14700 root      20   0 65.3g  40g 2528 R 16.7 64.5  40:00.28 glusterfsd  >>>>>>>>>>>>>>>>>>>>>>>                            
14715 root      20   0 65.3g  40g 2528 S 16.7 64.5  40:53.39 glusterfsd  >>>>>>>>>>>>>>>>>>>>>>>                            
14311 root      20   0 17.1g 8.0g 2528 S 16.7 12.7  39:05.23 glusterfsd                            
14706 root      20   0 10.3g 4.4g 2532 S 16.7  7.0  37:28.30 glusterfsd                            
14707 root      20   0 10.3g 4.4g 2532 S 16.7  7.0  37:52.83 glusterfsd

--- Additional comment from Xavier Hernandez on 2015-05-14 03:18:35 EDT ---

Thanks, I'll try again with NFS and iozone.

--- Additional comment from Anuradha on 2015-05-22 07:37:03 EDT ---

Bhaskarakiran,

Do you have sos-reports corresponding to the attached statedump? I need to go through the logs to understand the state of the system.

--- Additional comment from Anand Avati on 2015-06-02 07:24:57 EDT ---

REVIEW: http://review.gluster.org/11044 (fd: Do fd_bind on successful open) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

--- Additional comment from Pranith Kumar K on 2015-06-02 07:25:46 EDT ---

This patch only fixes the wrong fd_count being shown in the statedump (fd_binds were not happening). Still looking into more fd leaks.
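
As a minimal, self-contained C sketch of the idea behind that patch (simplified placeholder types and names, not the actual GlusterFS fd code): the fd is attached to its inode's fd list only after the open succeeds, so failed opens no longer inflate the per-inode fd count reported in a statedump.

/*
 * Illustrative only: NOT the GlusterFS fd_bind() implementation.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

struct my_fd {
    int           raw;      /* underlying OS descriptor           */
    struct my_fd *next;     /* link in the owning inode's fd list */
};

struct my_inode {
    struct my_fd *fds;      /* head of the bound fd list          */
    int           fd_count; /* what a statedump would report      */
};

/* Bind an fd to its inode only once we know the open worked. */
static void my_fd_bind(struct my_inode *inode, struct my_fd *fd)
{
    fd->next = inode->fds;
    inode->fds = fd;
    inode->fd_count++;
}

static struct my_fd *my_open(struct my_inode *inode, const char *path)
{
    struct my_fd *fd = calloc(1, sizeof(*fd));
    if (!fd)
        return NULL;

    fd->raw = open(path, O_RDONLY);
    if (fd->raw < 0) {        /* open failed: free it, do NOT bind */
        free(fd);
        return NULL;
    }

    my_fd_bind(inode, fd);    /* bind only on success */
    return fd;
}

int main(void)
{
    struct my_inode inode = { 0 };

    my_open(&inode, "/etc/hostname");   /* usually succeeds   */
    my_open(&inode, "/no/such/file");   /* fails: not counted */

    printf("bound fds: %d\n", inode.fd_count);
    return 0;
}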

--- Additional comment from Anand Avati on 2015-06-02 07:50:04 EDT ---

REVIEW: http://review.gluster.org/11044 (fd: Do fd_bind on successful open) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)

--- Additional comment from Anand Avati on 2015-06-02 08:29:01 EDT ---

REVIEW: http://review.gluster.org/11045 (features/quota: Fix ref-leak) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

--- Additional comment from Anand Avati on 2015-06-04 00:42:30 EDT ---

COMMIT: http://review.gluster.org/11045 committed in master by Raghavendra G (rgowdapp) 
------
commit 2b7ae84a5feb636f0e41d0ab36c04b7f3fbce520
Author: Pranith Kumar K <pkarampu>
Date:   Tue Jun 2 17:58:00 2015 +0530

    features/quota: Fix ref-leak
    
    Change-Id: I0b44b70f07be441e044d9dfc5c2b64bd5b4cac18
    BUG: 1207735
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/11045
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Raghavendra G <rgowdapp>
    Tested-by: Raghavendra G <rgowdapp>

Comment 1 Anand Avati 2015-06-08 11:13:04 UTC
REVIEW: http://review.gluster.org/11124 (features/quota: Fix ref-leak) posted (#1) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

Comment 2 Anand Avati 2015-06-09 06:17:39 UTC
REVIEW: http://review.gluster.org/11124 (features/quota: Fix ref-leak) posted (#2) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

Comment 3 Anand Avati 2015-06-11 10:52:43 UTC
COMMIT: http://review.gluster.org/11124 committed in release-3.7 by Raghavendra G (rgowdapp) 
------
commit bc743c012aca8b5854baf1b71a9ec9591c378645
Author: Pranith Kumar K <pkarampu>
Date:   Tue Jun 2 17:58:00 2015 +0530

    features/quota: Fix ref-leak
    
    This is a backport of http://review.gluster.org/#/c/11045
    
    > Change-Id: I0b44b70f07be441e044d9dfc5c2b64bd5b4cac18
    > BUG: 1207735
    > Signed-off-by: Pranith Kumar K <pkarampu>
    > Reviewed-on: http://review.gluster.org/11045
    > Tested-by: Gluster Build System <jenkins.com>
    > Reviewed-by: Raghavendra G <rgowdapp>
    > Tested-by: Raghavendra G <rgowdapp>
    > Signed-off-by: vmallika <vmallika>
    
    Change-Id: Id740d74fb5cf7a9b23027dbbb0a9f42616dcf2fc
    BUG: 1229282
    Signed-off-by: vmallika <vmallika>
    Reviewed-on: http://review.gluster.org/11124
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Raghavendra G <rgowdapp>
    Tested-by: Raghavendra G <rgowdapp>

Comment 4 Vijaikumar Mallikarjuna 2015-06-19 05:57:30 UTC
Patch submitted: http://review.gluster.org/#/c/11321/

Comment 5 Anand Avati 2015-06-19 07:14:02 UTC
REVIEW: http://review.gluster.org/11321 (quota/marker: fix mem-leak, free contribution node) posted (#2) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

Comment 6 Anand Avati 2015-06-19 09:04:03 UTC
REVIEW: http://review.gluster.org/11321 (quota/marker: fix mem-leak, free contribution node) posted (#3) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

Comment 7 Niels de Vos 2015-06-20 09:50:14 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.2, please reopen this bug report.

glusterfs-3.7.2 has been announced on the Gluster Packaging mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/packaging/2015-June/000006.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

Comment 8 Anand Avati 2015-06-22 06:13:24 UTC
REVIEW: http://review.gluster.org/11321 (quota/marker: fix mem-leak, free contribution node) posted (#5) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

Comment 9 Anand Avati 2015-06-22 08:49:55 UTC
REVIEW: http://review.gluster.org/11321 (quota/marker: fix mem-leak, free contribution node) posted (#6) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

Comment 10 Vijaikumar Mallikarjuna 2015-06-25 09:08:45 UTC
Memory leak is still seen with glusterfs-3.7.2, so reopening the bug.

Comment 11 Anand Avati 2015-06-25 09:13:24 UTC
REVIEW: http://review.gluster.org/11401 (quota/marker: fix mem-leak in marker) posted (#1) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

Comment 12 Anand Avati 2015-06-26 13:00:22 UTC
REVIEW: http://review.gluster.org/11401 (quota/marker: fix mem-leak in marker) posted (#4) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

Comment 13 Anand Avati 2015-06-27 05:32:44 UTC
COMMIT: http://review.gluster.org/11401 committed in release-3.7 by Raghavendra G (rgowdapp) 
------
commit 08586ee518de438fe2bbbaa74ae4c9a02a5d88cf
Author: vmallika <vmallika>
Date:   Wed Jun 24 11:56:30 2015 +0530

    quota/marker: fix mem-leak in marker
    
    This is a backport of http://review.gluster.org/#/c/11361/
    
    > When removing contribution xattr, we also need to free
    > contribution node in memory
    > Use ref/unref mechanism to handle contribution node memory
    >
    > local->xdata should be freed in mq_local_unref
    >
    > There is another source of huge memory consumption, in the
    > function mq_inspect_directory_xattr_task,
    > where the dirty flag is not set
    >
    > Change-Id: Ieca3ab4bf410c51259560e778bce4e81b9d888bf
    > BUG: 1207735
    > Signed-off-by: vmallika <vmallika>
    > Reviewed-on: http://review.gluster.org/11361
    > Reviewed-by: Krishnan Parthasarathi <kparthas>
    > Tested-by: NetBSD Build System <jenkins.org>
    > Reviewed-by: Raghavendra G <rgowdapp>
    > Tested-by: Raghavendra G <rgowdapp>
    
    Change-Id: I3038b41307f30867fa728054469ba917fd625e95
    BUG: 1229282
    Signed-off-by: vmallika <vmallika>
    Reviewed-on: http://review.gluster.org/11401
    Tested-by: Gluster Build System <jenkins.com>
    Tested-by: NetBSD Build System <jenkins.org>
    Reviewed-by: Raghavendra G <rgowdapp>
    Tested-by: Raghavendra G <rgowdapp>
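
A standalone C sketch of the ref/unref pattern the backport describes (illustrative names, not the marker source): the owner and each in-flight transaction hold their own reference to the contribution node; removing the contribution drops the owner's reference, and the node is freed only when the last reference is released.

#include <stdio.h>
#include <stdlib.h>

struct contribution {
    long long value;     /* bytes contributed to the parent's quota */
    int       refcount;
};

static struct contribution *contrib_new(long long value)
{
    struct contribution *c = calloc(1, sizeof(*c));
    if (!c)
        return NULL;
    c->value    = value;
    c->refcount = 1;     /* reference held by the owning inode ctx */
    return c;
}

static struct contribution *contrib_ref(struct contribution *c)
{
    c->refcount++;
    return c;
}

static void contrib_unref(struct contribution *c)
{
    if (--c->refcount == 0) {   /* last user gone: really free it */
        printf("freeing contribution node (value=%lld)\n", c->value);
        free(c);
    }
}

int main(void)
{
    struct contribution *owner_ref = contrib_new(4096);

    /* An accounting txn takes its own reference while it works.       */
    struct contribution *txn_ref = contrib_ref(owner_ref);

    /* Removing the contribution xattr: the owner drops its reference. */
    contrib_unref(owner_ref);

    /* Txn finishes and unrefs: last reference gone, node is freed.    */
    contrib_unref(txn_ref);
    return 0;
}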

Comment 14 Vijaikumar Mallikarjuna 2015-07-03 12:09:19 UTC
Memory leak is still seen, so reopening the bug.

Comment 15 Anand Avati 2015-07-03 12:09:44 UTC
REVIEW: http://review.gluster.org/11527 (quota/marker: fix mem leak in marker) posted (#1) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

Comment 16 Anand Avati 2015-07-03 12:13:01 UTC
REVIEW: http://review.gluster.org/11528 (posix: fix mem-leak in posix_get_ancestry error path) posted (#1) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

Comment 17 Anand Avati 2015-07-03 12:19:12 UTC
REVIEW: http://review.gluster.org/11529 (quota: fix mem leak in quota enforcer) posted (#1) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

Comment 18 Anand Avati 2015-07-07 11:01:22 UTC
COMMIT: http://review.gluster.org/11529 committed in release-3.7 by Raghavendra G (rgowdapp) 
------
commit 74a143100fa4b9532d37bed39504dcea9d371d18
Author: vmallika <vmallika>
Date:   Fri Jul 3 17:32:04 2015 +0530

    quota: fix mem leak in quota enforcer
    
    This is a backport of review.gluster.org/#/c/11526/
    
    Do inode_unref on parent
    
    > Change-Id: I21d82eb8716dd73aa2dc291b3ae8506e4fb4ea8b
    > BUG: 1207735
    > Signed-off-by: vmallika <vmallika>
    
    Change-Id: I4caeedbe8721b660df1c8502a0a42033f1d40a97
    BUG: 1229282
    Signed-off-by: vmallika <vmallika>
    Reviewed-on: http://review.gluster.org/11529
    Tested-by: Gluster Build System <jenkins.com>
    Tested-by: NetBSD Build System <jenkins.org>
    Reviewed-by: Raghavendra G <rgowdapp>
    Tested-by: Raghavendra G <rgowdapp>

Comment 19 Anand Avati 2015-07-07 14:21:57 UTC
REVIEW: http://review.gluster.org/11527 (quota/marker: fix mem leak in marker) posted (#2) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

Comment 20 Anand Avati 2015-07-08 05:34:58 UTC
COMMIT: http://review.gluster.org/11527 committed in release-3.7 by Raghavendra G (rgowdapp) 
------
commit 3f9dae11173475d759adb16dd64bea9cef0bf1c4
Author: vmallika <vmallika>
Date:   Mon Jun 29 19:12:28 2015 +0530

    quota/marker: fix mem leak in marker
    
    This is a backport of http://review.gluster.org/#/c/11457/
    
    Problem-1)
    Now that marker accounting happens in the background,
    there is a possibility that, before one create_xattr_txn
    completes, another create txn can be initiated
    for the same inode.
    If a few hundred txns are initiated
    before completion, this can block all synctask threads
    waiting on a lock, consume a lot of memory,
    and make the background accounting operation
    take longer to complete.
    
    This patch improves the locking mechanism, which
    improves performance as well as reducing memory
    consumption.
    
    Problem-2)
    For every lookup, and for all inodes in readdirp,
    we were initiating a new txn; this can result
    in more txns pending in the synctask queue and
    lead to huge memory consumption. Inspect
    file/dir should start a txn only if there
    is some delta.
    
    Problem-3)
    When there are multiple write operations on the
    same inode and all the synctask threads are busy,
    since the updation_status flag is checked in the
    background, all txns will be moved to the synctask queue.
    This can increase memory usage.
    
    One txn per inode in the queue is sufficient,
    so check and set the updation flag before moving a txn to
    the background.
    
    > Change-Id: Ic42ce00f0a50ce51c7128ba68a1b6a0699a1cd14
    > BUG: 1207735
    > Signed-off-by: vmallika <vmallika>
    
    Change-Id: I52a05b99b19b97c79b69671120f53e05481f99cd
    BUG: 1229282
    Signed-off-by: vmallika <vmallika>
    Reviewed-on: http://review.gluster.org/11527
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Raghavendra G <rgowdapp>
    Tested-by: Raghavendra G <rgowdapp>
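
A single-threaded C sketch of the Problem-3 fix described above (hypothetical names; in the real fix the check-and-set happens under the inode's lock): the updation flag is tested and set before a txn is handed to the background queue, so at most one accounting txn per inode can be pending at any time.

#include <stdbool.h>
#include <stdio.h>

struct inode_ctx {
    bool updation_pending;  /* is an accounting txn already queued? */
    int  queued_txns;       /* how many txns actually hit the queue */
};

static bool maybe_queue_accounting_txn(struct inode_ctx *ctx)
{
    if (ctx->updation_pending)
        return false;       /* a txn is already pending: skip       */
    ctx->updation_pending = true;
    ctx->queued_txns++;     /* stand-in for enqueueing to synctask  */
    return true;
}

/* Called when the background txn has finished its accounting work. */
static void accounting_txn_done(struct inode_ctx *ctx)
{
    ctx->updation_pending = false;
}

int main(void)
{
    struct inode_ctx ctx = { false, 0 };

    /* 100 writes land on the same inode before the txn runs... */
    for (int i = 0; i < 100; i++)
        maybe_queue_accounting_txn(&ctx);
    printf("txns queued for 100 writes: %d\n", ctx.queued_txns);   /* 1 */

    accounting_txn_done(&ctx);
    maybe_queue_accounting_txn(&ctx);   /* new delta: a new txn is fine */
    printf("txns queued after completion: %d\n", ctx.queued_txns); /* 2 */
    return 0;
}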

Comment 21 Anand Avati 2015-07-09 09:46:29 UTC
REVIEW: http://review.gluster.org/11593 (quota/marker: fix mem leak in marker) posted (#1) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

Comment 22 Anand Avati 2015-07-09 10:13:06 UTC
REVIEW: http://review.gluster.org/11595 (quota/marker: use smaller stacksize in synctask for marker updation) posted (#1) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

Comment 23 Anand Avati 2015-07-10 11:13:28 UTC
REVIEW: http://review.gluster.org/11619 (quota/marker: inspect file/dir invoked without having quota xattrs requested) posted (#1) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

Comment 24 Anand Avati 2015-07-10 11:18:38 UTC
REVIEW: http://review.gluster.org/11620 (quota/marker: fix mem-leak in marker) posted (#1) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

Comment 25 Anand Avati 2015-07-10 13:30:23 UTC
REVIEW: http://review.gluster.org/11619 (quota/marker: inspect file/dir invoked without having quota xattrs requested) posted (#2) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

Comment 26 Anand Avati 2015-07-13 09:11:50 UTC
REVIEW: http://review.gluster.org/11619 (quota/marker: inspect file/dir invoked without having quota xattrs requested) posted (#3) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

Comment 27 Anand Avati 2015-07-14 06:34:26 UTC
REVIEW: http://review.gluster.org/11620 (quota/marker: fix mem-leak in marker) posted (#2) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

Comment 28 Anand Avati 2015-07-14 06:38:56 UTC
REVIEW: http://review.gluster.org/11595 (quota/marker: use smaller stacksize in synctask for marker updation) posted (#3) for review on release-3.7 by Vijaikumar Mallikarjuna (vmallika)

Comment 29 Anand Avati 2015-07-14 11:27:39 UTC
COMMIT: http://review.gluster.org/11595 committed in release-3.7 by Raghavendra G (rgowdapp) 
------
commit c6de1e9de73e5ce08bf9099f14da74c2c1946132
Author: vmallika <vmallika>
Date:   Thu Jul 9 15:34:21 2015 +0530

    quota/marker: use smaller stacksize in synctask for marker updation
    
    This is a backport of http://review.gluster.org/#/c/11499/
    
    The default stack size that synctask uses is 2M.
    For marker we set it to 16k.
    
    Also move the marker xlator close to io-threads
    to have a smaller stack.
    
    > Change-Id: I8730132a6365cc9e242a3564a1e615d94ef2c651
    > BUG: 1207735
    > Signed-off-by: vmallika <vmallika>
    
    Change-Id: Id1cb6288a38d370956cc47aed5253ff95f04c966
    BUG: 1229282
    Signed-off-by: vmallika <vmallika>
    Reviewed-on: http://review.gluster.org/11595
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Raghavendra G <rgowdapp>
    Tested-by: Raghavendra G <rgowdapp>
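
The arithmetic behind this change: a few hundred queued marker txns at 2MB of stack each amounts to hundreds of MB, while 16KB each stays in the single-digit MB range. Below is a generic POSIX-threads illustration of giving a lightweight worker a small explicit stack (this is not the synctask API; names and the 16KB figure are illustrative, clamped to the platform minimum).

#include <limits.h>
#include <pthread.h>
#include <stdio.h>

static void *tiny_worker(void *arg)
{
    (void)arg;              /* only lightweight accounting-style work */
    return NULL;
}

int main(void)
{
    size_t small = 16 * 1024;            /* desired 16 KB stack      */
    if (small < PTHREAD_STACK_MIN)
        small = PTHREAD_STACK_MIN;       /* respect platform minimum */

    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, small);

    pthread_t t;
    int rc = pthread_create(&t, &attr, tiny_worker, NULL);
    if (rc != 0) {
        fprintf(stderr, "pthread_create failed: %d\n", rc);
        return 1;
    }
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);

    /* The memory math that motivates the smaller stack: */
    printf("500 tasks x 2 MB  stack ~= %d MB\n", 500 * 2);
    printf("500 tasks x 16 KB stack ~= %d MB\n", 500 * 16 / 1024);
    return 0;
}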

Comment 30 Anand Avati 2015-07-15 04:38:53 UTC
COMMIT: http://review.gluster.org/11620 committed in release-3.7 by Raghavendra G (rgowdapp) 
------
commit 12987fab053db3893acd5a6cc71ed6a88843756a
Author: vmallika <vmallika>
Date:   Sun Jul 12 21:03:54 2015 +0530

    quota/marker: fix mem-leak in marker
    
    This is a backport of http://review.gluster.org/#/c/11617/
    
    Free local in error paths
    
    > Change-Id: I76f69e7d746af8eedea34354ff5a6bf50234e50e
    > BUG: 1207735
    > Signed-off-by: vmallika <vmallika>
    
    Change-Id: I0f87ee11970e7bf6f8c910d112fc988c2afd6eca
    BUG: 1229282
    Signed-off-by: vmallika <vmallika>
    Reviewed-on: http://review.gluster.org/11620
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Raghavendra G <rgowdapp>
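
A generic C sketch of the "free local in error paths" pattern (placeholder names, not the marker code): every early exit funnels through one cleanup label, so the per-call context is released on success and on every failure path alike.

#include <stdlib.h>
#include <string.h>

struct local {              /* stand-in for a per-call context */
    char *path;
};

static int do_operation(const char *path, int fail_early)
{
    int ret = -1;
    struct local *local = calloc(1, sizeof(*local));
    if (!local)
        goto out;

    local->path = strdup(path);
    if (!local->path)
        goto out;           /* error path still reaches the cleanup */

    if (fail_early)
        goto out;           /* simulated mid-operation failure      */

    ret = 0;                /* success */
out:
    if (local) {            /* freed on success AND on every error  */
        free(local->path);
        free(local);
    }
    return ret;
}

int main(void)
{
    do_operation("/gv0/dir", 1);   /* error path: no leak */
    do_operation("/gv0/dir", 0);   /* success path        */
    return 0;
}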

Comment 31 Kaushal 2015-07-30 09:50:34 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.3, please open a new bug report.

glusterfs-3.7.3 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/12078
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
