Bug 1810924 - [SSL]: Memory leak by glusterfsd (issue heal info indefinitely to reproduce) when SSL enabled for Management layer
Summary: [SSL]: Memory leak by glusterfsd (issue heal info indefinitely to reproduce) when SSL enabled for Management layer
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: core
Version: rhgs-3.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 3
Assignee: Mohit Agrawal
QA Contact: Sayalee
URL:
Whiteboard:
Depends On:
Blocks: 1848894
 
Reported: 2020-03-06 08:39 UTC by Nag Pavan Chilakam
Modified: 2024-06-13 22:29 UTC (History)
14 users

Fixed In Version: glusterfs-6.0-38
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1848894
Environment:
Last Closed: 2020-12-17 04:51:18 UTC
Embargoed:




Links:
Red Hat Product Errata RHBA-2020:5603 (last updated 2020-12-17 04:51:41 UTC)

Description Nag Pavan Chilakam 2020-03-06 08:39:15 UTC
Description of problem:
======================
When SSL is enabled for the management layer, glusterfsd leaks memory.
This can be observed by issuing heal info repeatedly in a loop.

For more details, refer to https://bugzilla.redhat.com/show_bug.cgi?id=1785577#c32

The same test on a regular non-SSL cluster does not leak memory.

Version-Release number of selected component (if applicable):
=============================
glusterfs-server-6.0-30.1.el7rhgs.1.HOTFIX.Case02532384.BZ1785577.x86_64

How reproducible:
==============
always

Steps to Reproduce:
===================
1. Created a 6-node cluster.
2. Enabled SSL for the management layer alone on all 6 nodes and 1 client by following the steps provided in https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/index#part-Security (a sketch of this setup is shown after this list).
3. Created a 2x3 volume with one brick hosted on each node (no brick multiplexing).
4. Mounted the volume on the client.
5. Issued heal info continuously:
while true; do gluster v heal <vname> info ;done
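
For reference, a minimal sketch of the management-layer SSL setup in step 2, assuming self-signed certificates and the standard paths from the linked guide (hostnames and certificate lifetimes are illustrative):

# Run on every server node and on the client, as root.
# Generate a private key and a self-signed certificate for this machine:
openssl genrsa -out /etc/ssl/glusterfs.key 2048
openssl req -new -x509 -key /etc/ssl/glusterfs.key \
    -subj "/CN=$(hostname -f)" -days 365 -out /etc/ssl/glusterfs.pem

# Concatenate the glusterfs.pem files from all nodes and the client into
# one CA file and distribute it to every machine as /etc/ssl/glusterfs.ca
# (e.g. collect them with scp and cat them together).

# Enable encryption for the management layer only
# (server.ssl/client.ssl volume options stay off):
touch /var/lib/glusterd/secure-access

# Restart glusterd for the change to take effect:
systemctl restart glusterd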
 

Actual results:
==============
glusterfsd memory consumption spikes rapidly, at roughly 30 MB every 2 minutes.
I retried with a sleep of 5 seconds between each iteration, i.e.
while true; do gluster v heal <vname> info ; sleep 5;done

The memory consumption still kept growing, although at a slower pace: about 500 MB in 13-14 hours.
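
For reference, a minimal monitoring loop of the kind used to track this growth (my own sketch, not from the original report); it samples the resident set size of every glusterfsd process once a minute:

while true; do
    date
    ps -C glusterfsd -o pid=,rss=,args=   # RSS is reported in KiB
    sleep 60
done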

Expected results:
=======
No memory leak should be seen.

Additional info:
============
[root@dhcp35-194 glusterfs]# gluster v info
 
Volume Name: sslvol
Type: Distributed-Replicate
Volume ID: 232077de-f462-4fc3-8ab9-0f238c0efe3a
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: dhcp35-182.lab.eng.blr.redhat.com:/gluster/brick2/sslvol
Brick2: dhcp35-173.lab.eng.blr.redhat.com:/gluster/brick2/sslvol
Brick3: dhcp35-108.lab.eng.blr.redhat.com:/gluster/brick2/sslvol
Brick4: dhcp35-43.lab.eng.blr.redhat.com:/gluster/brick2/sslvol
Brick5: dhcp35-42.lab.eng.blr.redhat.com:/gluster/brick2/sslvol
Brick6: dhcp35-194.lab.eng.blr.redhat.com:/gluster/brick2/sslvol
Options Reconfigured:
server.ssl: off
client.ssl: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
[root@dhcp35-194 glusterfs]#
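
Note that the server.ssl and client.ssl options above cover only the I/O path; management-layer encryption is controlled by the presence of the secure-access file, so it can be verified independently of the volume options. A minimal check (assumed from the standard GlusterFS setup, not part of the original report):

# Management SSL is enabled on a node when this file exists:
test -f /var/lib/glusterd/secure-access && echo "management SSL enabled"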

Comment 32 errata-xmlrpc 2020-12-17 04:51:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5603

Comment 33 Red Hat Bugzilla 2023-09-14 05:53:58 UTC
The needinfo request[s] on this closed bug have been removed, as they have been unresolved for 1000 days.

