Bug 1289442 - high memory usage on client node
Summary: high memory usage on client node
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: GlusterFS
Classification: Community
Component: fuse
Version: mainline
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Vijay Bellur
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1371544 1371547
 
Reported: 2015-12-08 06:47 UTC by hojin kim
Modified: 2019-05-09 10:01 UTC
CC List: 3 users

Fixed In Version: glusterfs-6.x
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned to: 1371544 1371547
Environment:
Last Closed: 2019-05-09 10:01:34 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
Client's sosreport from the system suffering the memory leak issue (WAS system) (898.73 KB, application/x-xz)
2016-01-13 05:07 UTC, hojin kim
Client's sosreport from a system without the memory leak issue (WAS system) (885.32 KB, application/x-xz)
2016-01-13 05:12 UTC, hojin kim


Links
System ID: Red Hat Bugzilla 1126831 | Priority: high | Status: CLOSED | Summary: Memory leak in GlusterFS client | Last Updated: 2023-09-14 02:45:09 UTC

Internal Links: 1126831

Description hojin kim 2015-12-08 06:47:05 UTC
Description of problem:

The client glusterfs daemon uses a high amount of memory on Ubuntu 12.04 LTS, running GlusterFS 3.6.3.

I looked at the pmap output:

pmap <glusterfs pid> | grep anon

00007f6cd0000000 131072K rw---    [ anon ]
00007f6cd8000000 131072K rw---    [ anon ]
00007f6ce0000000 131072K rw---    [ anon ]
00007f6ce8000000 131072K rw---    [ anon ]
00007f6cf0000000 131072K rw---    [ anon ]
00007f6cf8000000 131072K rw---    [ anon ]
00007f6d00000000 131072K rw---    [ anon ]
00007f6d08000000 131072K rw---    [ anon ]
... 
00007f6d4b385000 3450888K rw---    [ anon ]

A large amount of memory is consumed by anonymous mappings.

I think it's a memory leak.
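For what it's worth, a breakdown of such allocations can usually be obtained from a GlusterFS statedump. A minimal sketch, assuming the default dump directory /var/run/gluster (older builds may write to /tmp instead):

# Ask the FUSE client to dump its internal state; glusterfs handles
# SIGUSR1 by writing a statedump file with per-translator allocation
# counts (num_allocs per memory type).
kill -USR1 $(pidof glusterfs)

# Inspect the newest dump to see which translator holds the memory.
ls -t /var/run/gluster/*dump* | head -1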

Steps to Reproduce:

None known.

Actual results:

None provided.

Expected results:


Additional info:

I gathered the sosreport, pmap output, /proc/<glusterfs pid>/status, and lsof output.
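Roughly, the collection looked like this (a sketch; the output file names are illustrative):

# Illustrative collection commands for the glusterfs client process.
pid=$(pidof glusterfs)
pmap -x "$pid"          > pmap.txt      # per-mapping breakdown
cat /proc/"$pid"/status > status.txt    # VmSize/VmRSS totals
lsof -p "$pid"          > lsof.txt      # open files and sockets
sosreport                               # general system state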

Comment 1 hojin kim 2015-12-09 02:10:53 UTC
I uploaded the sosreport and 3 txt files.
It's data from the clients.
was01 and was02 have a very large [anon] area (about 3 GB) in the pmap output.

was01
                        54 [anon] areas in total
00007f6d49c04000     76K r-x--  /usr/sbin/glusterfsd
00007f6d49e16000      4K r----  /usr/sbin/glusterfsd
00007f6d49e17000      8K rw---  /usr/sbin/glusterfsd
00007f6d4b33d000    288K rw---    [ anon ]
00007f6d4b385000 3450888K rw---    [ anon ]
00007fff26b32000    132K rw---    [ stack ]
00007fff26bf3000      4K r-x--    [ anon ]
ffffffffff600000      4K r-x--    [ anon ]

was02
00007fdef3016000     76K r-x--  /usr/sbin/glusterfsd
00007fdef3228000      4K r----  /usr/sbin/glusterfsd
00007fdef3229000      8K rw---  /usr/sbin/glusterfsd
00007fdef36fb000    288K rw---    [ anon ]
00007fdef3743000 3096552K rw---    [ anon ]
00007fff23fe8000    132K rw---    [ stack ]
00007fff241f8000      4K r-x--    [ anon ]
ffffffffff600000      4K r-x--    [ anon ]

But on was03, the [anon] areas are only small:

00007fa42b6e9000     76K r-x--  /usr/sbin/glusterfsd
00007fa42b8fb000      4K r----  /usr/sbin/glusterfsd
00007fa42b8fc000      8K rw---  /usr/sbin/glusterfsd
00007fa42c5de000    288K rw---    [ anon ]
00007fa42c626000  32804K rw---    [ anon ]


Please check it. Thanks.
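For a quick side-by-side comparison, the anonymous mappings can be totalled per host with something like this (a sketch, assuming procps pmap, whose second column is the mapping size in KB):

# Sum the sizes of all [ anon ] segments of the glusterfs client.
# awk's numeric coercion ignores the trailing "K" in sizes like 131072K.
pmap $(pidof glusterfs) | awk '/anon/ { sum += $2 } END { print sum " KB anon" }'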

Comment 2 Vijay Bellur 2016-01-06 09:16:18 UTC
I don't see the sosreport attached. Can you please provide more details on the gluster volume configuration and the nature of I/O operations being performed on the client? Thanks.
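The volume configuration being requested here is typically captured with commands along these lines (the volume name gv0 is hypothetical):

# Show volume layout, brick list, and reconfigured options.
gluster volume info gv0
# Show brick and process status for the same volume.
gluster volume status gv0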

Comment 3 hojin kim 2016-01-13 05:07:07 UTC
Created attachment 1114275 [details]
Client's sosreport from the system suffering the memory leak issue (WAS system)

I will upload 2 files.

The first is sosreport-UK1-PRD-WAS01-20151202110452.tar.xz.
This server runs the WAS service and is a client of the glusterfs file server.
On UK1-PRD-WAS01, glusterfs uses about 5.2 GB of memory.

Comment 4 hojin kim 2016-01-13 05:12:57 UTC
Created attachment 1114276 [details]
Client's sosreport from a system without the memory leak issue (WAS system)

This is a normal WAS system; there is no memory issue.
It's sosreport-UK1-PRD-WAS03-20151202112104.tar.xz.

It has the same environment as UK1-PRD-WAS01,
but its memory usage is only about 0.4 GB.

Comment 5 hojin kim 2016-01-13 05:22:37 UTC
Hi, Vijay. 
I uploaded 2 client files:

sosreport-UK1-PRD-WAS01-20151202110452.tar.xz --> memory usage of glusterfs is high
sosreport-UK1-PRD-WAS03-20151202112104.tar.xz --> memory usage of glusterfs is low

The glusterfs client systems are WAS systems, and I/O is normally generated by the WAS (Tomcat) client.


The configuration is as below. It's the server environment.

--------------------------------------------------------------
mount volume 

server1: UK1-PRD-FS01    UK2-PRD-FS01  ==> replicated volume0
              |                |
         distributed      distributed
              |                |
server2: UK1-PRD-FS02    UK2-PRD-FS02  ==> replicated volume1
              |
       geo-replicate
              |
            ukdr

===============================================
client  

UK1-PRD-WAS01 (attached to UK1-PRD-FS01) --> memory problem (uploaded)
UK1-PRD-WAS02 (attached to UK1-PRD-FS02) --> memory problem
UK1-PRD-WAS03 (attached to UK1-PRD-FS01) --> no memory problem (uploaded)
... about 10 machines in total
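A layout like the one sketched above would typically be created along these lines (a sketch; the volume name, brick paths, and geo-replication slave volume are hypothetical):

# Hypothetical recreation of the 2x2 distributed-replicated volume:
# each UK1/UK2 pair forms a replica set, and the two sets are distributed.
gluster volume create gv0 replica 2 \
    UK1-PRD-FS01:/bricks/brick0 UK2-PRD-FS01:/bricks/brick0 \
    UK1-PRD-FS02:/bricks/brick1 UK2-PRD-FS02:/bricks/brick1
gluster volume start gv0

# Geo-replication towards the ukdr site (slave volume name hypothetical).
gluster volume geo-replication gv0 ukdr::gv0-dr create push-pem
gluster volume geo-replication gv0 ukdr::gv0-dr start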

Comment 6 hojin kim 2016-02-17 04:00:43 UTC
(In reply to Vijay Bellur from comment #2)
> I don't see the sosreport attached. Can you please provide more details on
> the gluster volume configuration and the nature of I/O operations being
> performed on the client? Thanks.

Please review again. We are waiting for your response.

Comment 7 Amar Tumballi 2019-05-09 10:01:34 UTC
With the fix for https://bugzilla.redhat.com/show_bug.cgi?id=1560969, we find that these issues are now resolved.

