Bug 1289442 - high memory usage on client node
Status: NEW
Product: GlusterFS
Classification: Community
Component: fuse
Version: mainline
Hardware: x86_64 Linux
Priority: medium   Severity: high
Assigned To: Vijay Bellur
Keywords: Triaged
Blocks: 1371544 1371547
 
Reported: 2015-12-08 01:47 EST by hojin kim
Modified: 2016-08-30 08:55 EDT
CC: 2 users

Doc Type: Bug Fix
Clones: 1371544 1371547
Type: Bug

Attachments
Client's sosreport from the system suffering the memory leak issue (WAS system) (898.73 KB, application/x-xz)
2016-01-13 00:07 EST, hojin kim
Client's sosreport from a system without the memory leak issue (WAS system) (885.32 KB, application/x-xz)
2016-01-13 00:12 EST, hojin kim
Description hojin kim 2015-12-08 01:47:05 EST
Description of problem:

High memory usage (suspected leak) by the client glusterfs daemon, running GlusterFS 3.6.3 on Ubuntu 12.04 LTS.

Here is the pmap output:

pmap <glusterfs pid> | grep anon

00007f6cd0000000 131072K rw---    [ anon ]
00007f6cd8000000 131072K rw---    [ anon ]
00007f6ce0000000 131072K rw---    [ anon ]
00007f6ce8000000 131072K rw---    [ anon ]
00007f6cf0000000 131072K rw---    [ anon ]
00007f6cf8000000 131072K rw---    [ anon ]
00007f6d00000000 131072K rw---    [ anon ]
00007f6d08000000 131072K rw---    [ anon ]
... 
00007f6d4b385000 3450888K rw---    [ anon ]

A large amount of memory is mapped as anonymous regions.

I think it's a memory leak.
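
To total the anonymous memory quickly, the anon mappings can be summed straight from the pmap output. A minimal sketch (pidof -s picks a single PID; if several glusterfs processes are running, substitute the exact client PID):

pmap $(pidof -s glusterfs) | awk '/\[ anon \]/ {sum += $2} END {printf "anon total: %.1f MB\n", sum/1024}'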

Steps to Reproduce:

None identified.

Actual results:

See the pmap output above.

Expected results:

Stable memory usage on the client.

Additional info:

I gathered the sosreport, pmap output, /proc/<gluster pid>/status, and lsof output.
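
A statedump would also help pinpoint which translator holds the memory, since it includes per-allocation-type counters. A hedged sketch (the dump directory is commonly /var/run/gluster, but it can vary between builds):

kill -USR1 <glusterfs pid>           # ask the client process to write a statedump
ls /var/run/gluster/glusterdump.*    # dump files typically appear here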
Comment 1 hojin kim 2015-12-08 21:10:53 EST
I uploaded the sosreport and 3 txt files.
They contain data from the clients.
According to pmap, was01 and was02 have very large [anon] areas (about 3 GB each).

was01 (54 [anon] areas):
00007f6d49c04000     76K r-x--  /usr/sbin/glusterfsd
00007f6d49e16000      4K r----  /usr/sbin/glusterfsd
00007f6d49e17000      8K rw---  /usr/sbin/glusterfsd
00007f6d4b33d000    288K rw---    [ anon ]
00007f6d4b385000 3450888K rw---    [ anon ]
00007fff26b32000    132K rw---    [ stack ]
00007fff26bf3000      4K r-x--    [ anon ]
ffffffffff600000      4K r-x--    [ anon ]

was02:
00007fdef3016000     76K r-x--  /usr/sbin/glusterfsd
00007fdef3228000      4K r----  /usr/sbin/glusterfsd
00007fdef3229000      8K rw---  /usr/sbin/glusterfsd
00007fdef36fb000    288K rw---    [ anon ]
00007fdef3743000 3096552K rw---    [ anon ]
00007fff23fe8000    132K rw---    [ stack ]
00007fff241f8000      4K r-x--    [ anon ]
ffffffffff600000      4K r-x--    [ anon ]

But on was03, the [anon] areas are much smaller:

00007fa42b6e9000     76K r-x--  /usr/sbin/glusterfsd
00007fa42b8fb000      4K r----  /usr/sbin/glusterfsd
00007fa42b8fc000      8K rw---  /usr/sbin/glusterfsd
00007fa42c5de000    288K rw---    [ anon ]
00007fa42c626000  32804K rw---    [ anon ]


Please check it. Thanks.
Comment 2 Vijay Bellur 2016-01-06 04:16:18 EST
I don't see the sosreport attached. Can you please provide more details on the gluster volume configuration and the nature of I/O operations being performed on the client? Thanks.
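
(For reference, a sketch of commands that capture the requested volume details on any server node:)

gluster volume info      # volume type, brick layout, configured options
gluster volume status    # per-brick process and port status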
Comment 3 hojin kim 2016-01-13 00:07 EST
Created attachment 1114275 [details]
Client's sosreport from the system suffering the memory leak issue (WAS system)

I will upload 2 files.

The first is sosreport-UK1-PRD-WAS01-20151202110452.tar.xz.
This server runs the WAS service and is a glusterfs client of the file servers.
On UK1-PRD-WAS01, glusterfs uses about 5.2 GB of memory.
Comment 4 hojin kim 2016-01-13 00:12 EST
Created attachment 1114276 [details]
Client's sosreport from a system without the memory leak issue (WAS system)

This is a normal WAS system; it has no memory issue.
It is sosreport-UK1-PRD-WAS03-20151202112104.tar.xz.

It has the same environment as UK1-PRD-WAS01,
but its memory usage is only about 0.4 GB.
Comment 5 hojin kim 2016-01-13 00:22:37 EST
Hi, Vijay.
I uploaded 2 client files:

sosreport-UK1-PRD-WAS01-20151202110452.tar.xz --> memory usage of glusterfs is high
sosreport-UK1-PRD-WAS03-20151202112104.tar.xz --> memory usage of glusterfs is low

The glusterfs client systems are WAS systems; I/O is normally generated by the WAS (Tomcat) client.


The configuration is as below. It is the server environment.

--------------------------------------------------------------
mounted volume

server1: UK1-PRD-FS01    UK2-PRD-FS01  ==> replicated volume0
              |                |
         distributed      distributed
              |                |
server2: UK1-PRD-FS02    UK2-PRD-FS02  ==> replicated volume1
              |
      geo-replicated to ukdr

===============================================
clients

UK1-PRD-WAS01 (attached to UK1-PRD-FS01) --> memory problem (uploaded)
UK1-PRD-WAS02 (attached to UK1-PRD-FS02) --> memory problem
UK1-PRD-WAS03 (attached to UK1-PRD-FS01) --> no memory problem (uploaded)
.......... about 10 machines in total
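
For reference, a hedged sketch of how a 2x2 distributed-replicated volume with this topology is typically created (volume name and brick paths here are hypothetical):

gluster volume create prdvol replica 2 \
    UK1-PRD-FS01:/bricks/b0 UK2-PRD-FS01:/bricks/b0 \
    UK1-PRD-FS02:/bricks/b1 UK2-PRD-FS02:/bricks/b1

With replica 2, consecutive bricks form the replica pairs, matching the diagram above.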
Comment 6 hojin kim 2016-02-16 23:00:43 EST
(In reply to Vijay Bellur from comment #2)
> I don't see the sosreport attached. Can you please provide more details on
> the gluster volume configuration and the nature of I/O operations being
> performed on the client? Thanks.

Please review again. We are waiting for your response.
