Bug 1795540 - mem leak while using gluster tools
Summary: mem leak while using gluster tools
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: 7
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Mohit Agrawal
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-01-28 09:31 UTC by Rafał Mielnik
Modified: 2023-09-14 05:50 UTC
CC: 4 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2020-02-10 07:50:36 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:




Links
Gluster.org Gerrit 24073 (Merged): rpc: Cleanup SSL specific data at the time of freeing rpc object (last updated 2020-02-10 07:50:34 UTC)

Description Rafał Mielnik 2020-01-28 09:31:12 UTC
Description of problem:

The glusterfs process consumes more and more memory, even on clusters with unused volumes where "gluster volume status" is the only activity.

Version-Release number of selected component (if applicable):

glusterfs-7.2-1.el7.x86_64, but I have known about the problem for a long time (back to version 3).

How reproducible:

Just run "gluster volume status" in a loop.

Steps to Reproduce:
1. Check glusterfs process memory usage (see the sketch after these steps).
2. Run: while true; do gluster volume status; sleep 1; done
3. Wait a couple of hours.
4. Check glusterfs process memory usage again.
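
For reference, a minimal sketch of how steps 1 and 4 can be checked, assuming an EL7 host running the usual daemons (glusterd, glusterfsd, glusterfs); which of them actually grows may differ per setup:

ps -C glusterd,glusterfsd,glusterfs -o pid,rss,vsz,cmd   # RSS/VSZ snapshot of each gluster daemon
pmap -x <pid> | tail -n 1                                # memory totals for one suspect PID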

Actual results:

Memory usage keeps growing until the process is OOM-killed.

Expected results:

Memory usage remains roughly stable over time; the process is not OOM-killed.

Additional info:

Comment 1 Sanju 2020-01-28 10:44:02 UTC
Moving it to the right component, as the leak is not in the glusterd process.

Comment 2 Rafał Mielnik 2020-01-28 11:04:28 UTC
One more thing: it looks like it leaks more (or perhaps only?) when SSL is in use (client.ssl/server.ssl "on"). I have always used clusters with SSL, but I spotted a cluster without SSL that has a long uptime and no memleak-like behaviour. I ran the loop again on a cluster after turning SSL off, and after 1 hour the difference in memory usage is negligible (still, I am going to wait a few more hours). A rough sketch of the comparison follows my settings below.


my custom settings (3 nodes):

gluster volume set volume0 cluster.server-quorum-type server
gluster volume set volume0 diagnostics.count-fop-hits yes
gluster volume set volume0 diagnostics.latency-measurement yes

gluster volume set volume0 client.ssl on
gluster volume set volume0 server.ssl on
gluster volume set volume0 ssl.cipher-list 'HIGH:!SSLv2:!SSLv3:!TLSv1:!TLSv1.1:TLSv1.2:!3DES:!RC4:!aNULL:!ADH'
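
A rough sketch of the on/off comparison (the volume name volume0 and the one-hour window are just placeholders; whether a volume restart is needed for client.ssl/server.ssl changes to fully take effect should be double-checked):

ps -C glusterd,glusterfsd -o pid,rss,cmd    # baseline RSS with SSL on
gluster volume set volume0 client.ssl off
gluster volume set volume0 server.ssl off
# management-path encryption is separate (the /var/lib/glusterd/secure-access file)
while true; do gluster volume status; sleep 1; done &
sleep 3600; kill $!
ps -C glusterd,glusterfsd -o pid,rss,cmd    # RSS after an hour without SSL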

Comment 3 Sanju 2020-01-28 11:22:56 UTC
I remember Mohit fixing a memory-leak issue in an SSL-enabled environment. I am not sure whether we have the fix in release 7.1.

Mohit, please confirm.

Thanks,
Sanju

Comment 4 Worker Ant 2020-01-28 12:53:35 UTC
REVIEW: https://review.gluster.org/24073 (rpc: Cleanup SSL specific data at the time of freeing rpc object) posted (#1) for review on release-7 by MOHIT AGRAWAL

Comment 5 Mohit Agrawal 2020-01-28 12:55:58 UTC
The patch was not backported to release-7, so I posted a patch there to resolve the same issue.

Comment 6 Worker Ant 2020-02-10 07:50:36 UTC
REVIEW: https://review.gluster.org/24073 (rpc: Cleanup SSL specific data at the time of freeing rpc object) merged (#3) on release-7 by Rinku Kothiya
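
For anyone who wants to confirm the fix after upgrading to a release-7 build that contains Gerrit change 24073, a minimal sketch of a re-check (the glusterfs-server package name and the two-hour window are assumptions for illustration):

rpm -q glusterfs-server                      # confirm the installed build includes the backport
while true; do gluster volume status; sleep 1; done &
sleep 7200; kill $!
ps -C glusterd,glusterfsd -o pid,rss,cmd     # RSS should stay roughly flat even with SSL on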

Comment 7 Red Hat Bugzilla 2023-09-14 05:50:51 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

