Bug 1371547 - high memory usage on client node
Summary: high memory usage on client node
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: fuse
Version: 3.7.15
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Vijay Bellur
QA Contact:
URL:
Whiteboard:
Depends On: 1289442
Blocks: 1371544
 
Reported: 2016-08-30 12:55 UTC by hari gowtham
Modified: 2017-03-08 11:02 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1289442
Environment:
Last Closed: 2017-03-08 11:02:07 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description hari gowtham 2016-08-30 12:55:25 UTC
+++ This bug was initially created as a clone of Bug #1289442 +++

Description of problem:
High memory leak on a client node running Ubuntu 12.04 LTS.

The client glusterfs daemon uses a large amount of memory with GlusterFS 3.6.3.

I looked at the pmap data:

pmap <glusterfs pid> | grep anon

00007f6cd0000000 131072K rw---    [ anon ]
00007f6cd8000000 131072K rw---    [ anon ]
00007f6ce0000000 131072K rw---    [ anon ]
00007f6ce8000000 131072K rw---    [ anon ]
00007f6cf0000000 131072K rw---    [ anon ]
00007f6cf8000000 131072K rw---    [ anon ]
00007f6d00000000 131072K rw---    [ anon ]
00007f6d08000000 131072K rw---    [ anon ]
... 
00007f6d4b385000 3450888K rw---    [ anon ]

A large amount of memory is used by anonymous mappings.

I think it's a memory leak.
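
For reference, the total size of these anonymous mappings can be summed straight from the pmap output. This is only a sketch; pgrep -x glusterfs assumes the FUSE client process is named glusterfs and that a single mount is active:

# PID of the FUSE client process (with several mounts this returns more than one)
PID=$(pgrep -x glusterfs | head -n 1)
# sum every anonymous mapping reported by pmap (sizes are in KB)
pmap "$PID" | awk '/anon/ {sum += $2} END {printf "total anon: %d KB\n", sum}'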

Steps to Reproduce:

None provided.

Actual results:

None provided.

Expected results:


Additional info:

I gathered sosreport, pmap, /proc/<gluster pid>/status, and lsof output.
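
If it helps, a statedump of the client process would show which memory pools and allocations are growing. Roughly, on a default install (the /var/run/gluster location is an assumption about the build):

kill -USR1 $(pgrep -x glusterfs)   # ask the glusterfs client process to write a statedump
ls -lt /var/run/gluster/           # dump files normally land here on a default install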

--- Additional comment from hojin kim on 2015-12-08 21:10:53 EST ---

I uploaded the sosreport and 3 txt files.
The data is for the clients.
was01 and was02 have a very large [anon] area (about 3 GB) according to pmap.

was01 (54 [anon] areas)
00007f6d49c04000     76K r-x--  /usr/sbin/glusterfsd
00007f6d49e16000      4K r----  /usr/sbin/glusterfsd
00007f6d49e17000      8K rw---  /usr/sbin/glusterfsd
00007f6d4b33d000    288K rw---    [ anon ]
00007f6d4b385000 3450888K rw---    [ anon ]
00007fff26b32000    132K rw---    [ stack ]
00007fff26bf3000      4K r-x--    [ anon ]
ffffffffff600000      4K r-x--    [ anon ]

was02
00007fdef3016000     76K r-x--  /usr/sbin/glusterfsd
00007fdef3228000      4K r----  /usr/sbin/glusterfsd
00007fdef3229000      8K rw---  /usr/sbin/glusterfsd
00007fdef36fb000    288K rw---    [ anon ]
00007fdef3743000 3096552K rw---    [ anon ]
00007fff23fe8000    132K rw---    [ stack ]
00007fff241f8000      4K r-x--    [ anon ]
ffffffffff600000      4K r-x--    [ anon ]

But on was03, the [anon] areas are small:

00007fa42b6e9000     76K r-x--  /usr/sbin/glusterfsd
00007fa42b8fb000      4K r----  /usr/sbin/glusterfsd
00007fa42b8fc000      8K rw---  /usr/sbin/glusterfsd
00007fa42c5de000    288K rw---    [ anon ]
00007fa42c626000  32804K rw---    [ anon ]


Please check it. Thanks.

--- Additional comment from Vijay Bellur on 2016-01-06 04:16:18 EST ---

I don't see the sosreport attached. Can you please provide more details on the gluster volume configuration and the nature of I/O operations being performed on the client? Thanks.
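
For reference, the volume configuration being asked about can be captured on any of the FS server nodes with the standard gluster CLI; <VOLNAME> below is a placeholder for whatever "gluster volume list" reports:

gluster volume list                      # names of the configured volumes
gluster volume info <VOLNAME>            # volume type, bricks, and reconfigured options
gluster volume status <VOLNAME> detail   # per-brick status and resource usage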

--- Additional comment from hojin kim on 2016-01-13 00:07 EST ---

I will upload 2 files.

The first is sosreport-UK1-PRD-WAS01-20151202110452.tar.xz.
This server runs the WAS service and is a client of the GlusterFS file server.
On UK1-PRD-WAS01, glusterfs uses about 5.2 GB of memory.

--- Additional comment from hojin kim on 2016-01-13 00:12 EST ---

This is a normal WAS system; there is no memory issue.
It's sosreport-UK1-PRD-WAS03-20151202112104.tar.xz.

Of course, it has the same environment as UK1-PRD-WAS01,
but its memory usage is only about 0.4 GB.

--- Additional comment from hojin kim on 2016-01-13 00:22:37 EST ---

Hi, Vijay.
I uploaded 2 client files:

sosreport-UK1-PRD-WAS01-20151202110452.tar.xz --> memory usage of glusterfs is high
sosreport-UK1-PRD-WAS03-20151202112104.tar.xz --> memory usage of glusterfs is low

The GlusterFS client systems are WAS systems, and normally the I/O is generated by the WAS (Tomcat) client.


The configuration is as below. It is the server environment.

--------------------------------------------------------------
mount volume 

server1: UK1-PRD-FS01    UK2-PRD-FS01  ==> replicated volume0
              |               |
         distributed     distributed
              |               |
server2: UK1-PRD-FS02    UK2-PRD-FS02  ==> replicated volume1
              |
       geo-replication
              |
            ukdr

===============================================
client  

UK1-PRD-WAS01 (attached at UK1-PRD-FS01) --> memory problem (uploaded)
UK1-PRD-WAS02 (attached at UK1-PRD-FS02) --> memory problem 
UK1-PRD-WAS03 (attached at UK1-PRD-FS01) --> no memory problem  (uploaded)
..........about 10 machines
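
For completeness, each WAS client attaches to its FS node with an ordinary FUSE mount along these lines; the volume name and mount point are placeholders, not values taken from the sosreports:

mount -t glusterfs UK1-PRD-FS01:/<VOLNAME> /mnt/<mountpoint>
# equivalent /etc/fstab entry:
# UK1-PRD-FS01:/<VOLNAME> /mnt/<mountpoint> glusterfs defaults,_netdev 0 0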

--- Additional comment from hojin kim on 2016-02-16 23:00:43 EST ---

(In reply to Vijay Bellur from comment #2)
> I don't see the sosreport attached. Can you please provide more details on
> the gluster volume configuration and the nature of I/O operations being
> performed on the client? Thanks.

Please review again. We are waiting for your response.

Comment 1 Kaushal 2017-03-08 11:02:07 UTC
This bug is getting closed because GlusterFS-3.7 has reached its end-of-life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.

