Bug 818161 - memory leaks since 3.2.6
Status: CLOSED NOTABUG
Product: GlusterFS
Classification: Community
Component: core
Version: 3.2.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Assigned To: Amar Tumballi
Reported: 2012-05-02 07:06 EDT by Calin Don
Modified: 2013-12-18 19:08 EST
CC: 2 users

Doc Type: Bug Fix
Last Closed: 2012-06-18 05:31:35 EDT
Type: Bug


Attachments: None
Description Calin Don 2012-05-02 07:06:05 EDT
Hi,

I have a setup with 7 gluster nodes, 5 of them are running 3.2.5 and 2 are running 3.2.6.

The storage is on two nodes of the 5 running 3.2.5.

On the ones running 3.2.6, the memory usage of glusterfs is constantly growing and reached 10 GB in about two weeks, while on the nodes with 3.2.5 it stays steady between 300 and 500 MB.
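A quick way to quantify growth like this is to sample the daemon's resident set size over time. This is a minimal sketch, assuming the leaking process is named `glusterfs` and that `pgrep` and `ps` are available; the interval and log path in the usage comment are placeholders, not from the report.

```shell
#!/bin/sh
# Print the resident set size (RSS, in kB) of the oldest process whose
# name matches the given pattern. Returns non-zero if no match is found.
sample_rss_kb() {
    pid=$(pgrep -o "$1") || return 1
    ps -o rss= -p "$pid" | tr -d ' '
}

# Example loop (placeholder interval and log path): append a timestamped
# sample every 60 seconds so the growth can be plotted later.
# while true; do
#     echo "$(date '+%F %T') $(sample_rss_kb glusterfs)" >> /var/log/glusterfs-rss.log
#     sleep 60
# done
```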

Let me know how to proceed further.

Thanks,
Calin
Comment 1 Amar Tumballi 2012-05-02 12:39:28 EDT
Hi Calin,

Thanks for the info. Can you please let us know the general pattern of operations performed on the mount point, and also the type of the volume (i.e., `gluster volume info`)?
Comment 2 Calin Don 2012-05-09 10:13:18 EDT
Hi Amar,

This is the mount point in question:

localhost:/instances on /instances type nfs (rw,noatime,mountproto=tcp,intr,ac,acregmax=120,acdirmax=120,acregmin=10,acdirmin=60,nolock,addr=127.0.0.1,mountaddr=127.0.0.1)

I use NFS over Gluster and mount it from localhost as described here: http://community.gluster.org/a/nfs-performance-with-fuse-client-redundancy/
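For reference, a mount with these options would correspond to an /etc/fstab entry along these lines. This is a sketch reconstructed from the mount output above, not taken from the reporter's actual fstab; the mount point `/instances` and volume name `instances` come from the report.

```shell
# /etc/fstab entry for the localhost NFS mount of the gluster volume
# (options copied from the reported mount output):
# localhost:/instances  /instances  nfs  rw,noatime,mountproto=tcp,intr,ac,acregmax=120,acdirmax=120,acregmin=10,acdirmin=60,nolock  0 0

# Equivalent one-off mount command:
# mount -t nfs -o rw,noatime,mountproto=tcp,intr,nolock localhost:/instances /instances
```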

It is used for serving web applications (some php files + static files).

Here is the output of # gluster volume info

Volume Name: instances
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: data1-z1.presslabs.net:/bricks/instances
Brick2: data2-z1.presslabs.net:/bricks/instances
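For context, a two-brick replicated volume like the one shown above could be created with commands along these lines. This is a hedged sketch based on the volume info output, not the commands the reporter actually ran.

```shell
# Create a 2-way replicated volume over TCP from the two brick hosts
# listed in the volume info above, then start it:
# gluster volume create instances replica 2 transport tcp \
#     data1-z1.presslabs.net:/bricks/instances \
#     data2-z1.presslabs.net:/bricks/instances
# gluster volume start instances
```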
Comment 3 Amar Tumballi 2012-05-10 03:43:45 EDT
Calin, can you try our latest QA release http://bits.gluster.com/pub/gluster/glusterfs/src/glusterfs-3.3.0qa40.tar.gz if you are not yet in production? This would help us see whether this issue is already fixed in the upcoming 3.3.0 release.
Comment 4 Calin Don 2012-05-10 05:27:51 EDT
(In reply to comment #3)
> Calin, Can you try our latest qa release
> http://bits.gluster.com/pub/gluster/glusterfs/src/glusterfs-3.3.0qa40.tar.gz if
> you are not yet on production? This would help us to see if this is already a
> fixed issue in upcoming 3.3.0 release.

Unfortunately all the nodes are in production. Is there anything I can do to track down these issues?
Comment 5 Amar Tumballi 2012-06-07 06:54:03 EDT
Calin Don,

We have released GlusterFS 3.3.0. Please upgrade to that version, which should fix your issue.

Regards,
Amar
Comment 6 Calin Don 2012-06-18 05:31:35 EDT
Although I haven't upgraded to 3.3 yet, I found that there were no issues with 3.2.6. The memory increase was caused by problems with hostname resolution: the peers running 3.2.6 could not be resolved by the other peers in the network.
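The kind of hostname-resolution failure described here can be checked with a small script run on each node. A minimal sketch; the hostnames in the usage comment are the brick hosts from this volume, and `getent` is assumed to be available.

```shell
#!/bin/sh
# For each hostname given, report whether it resolves on this node.
check_resolution() {
    for host in "$@"; do
        if getent hosts "$host" > /dev/null; then
            echo "$host resolves"
        else
            echo "$host DOES NOT resolve"
        fi
    done
}

# Example (brick hosts from this volume):
# check_resolution data1-z1.presslabs.net data2-z1.presslabs.net
# On each node, `gluster peer status` should also show every peer as
# "Peer in Cluster (Connected)".
```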
