Description of problem:
High memory usage (suspected leak) by the client glusterfs daemon, glusterfs 3.6.3 on Ubuntu 12.04 LTS. pmap shows many large anonymous mappings:

pmap <glusterfs pid> | grep anon
00007f6cd0000000  131072K rw---    [ anon ]
00007f6cd8000000  131072K rw---    [ anon ]
00007f6ce0000000  131072K rw---    [ anon ]
00007f6ce8000000  131072K rw---    [ anon ]
00007f6cf0000000  131072K rw---    [ anon ]
00007f6cf8000000  131072K rw---    [ anon ]
00007f6d00000000  131072K rw---    [ anon ]
00007f6d08000000  131072K rw---    [ anon ]
...
00007f6d4b385000 3450888K rw---    [ anon ]

A large amount of memory is held in these anonymous mappings; I think it is a memory leak.

Steps to Reproduce:
Not known.

Actual results:
None provided.

Expected results:
None provided.

Additional info:
I gathered sosreport, pmap, /proc/<glusterfs pid>/status, and lsof output.
I uploaded the sosreport and 3 txt files. This is data for the clients: was01 and was02 have a very large [ anon ] region (about 3 GB) in the pmap output.

was01:
00007f6d49c04000      76K r-x--  /usr/sbin/glusterfsd
00007f6d49e16000       4K r----  /usr/sbin/glusterfsd
00007f6d49e17000       8K rw---  /usr/sbin/glusterfsd
00007f6d4b33d000     288K rw---    [ anon ]
00007f6d4b385000 3450888K rw---    [ anon ]
00007fff26b32000     132K rw---    [ stack ]
00007fff26bf3000       4K r-x--    [ anon ]
ffffffffff600000       4K r-x--    [ anon ]

was02:
00007fdef3016000      76K r-x--  /usr/sbin/glusterfsd
00007fdef3228000       4K r----  /usr/sbin/glusterfsd
00007fdef3229000       8K rw---  /usr/sbin/glusterfsd
00007fdef36fb000     288K rw---    [ anon ]
00007fdef3743000 3096552K rw---    [ anon ]
00007fff23fe8000     132K rw---    [ stack ]
00007fff241f8000       4K r-x--    [ anon ]
ffffffffff600000       4K r-x--    [ anon ]

But on was03 the [ anon ] regions are small:
00007fa42b6e9000      76K r-x--  /usr/sbin/glusterfsd
00007fa42b8fb000       4K r----  /usr/sbin/glusterfsd
00007fa42b8fc000       8K rw---  /usr/sbin/glusterfsd
00007fa42c5de000     288K rw---    [ anon ]
00007fa42c626000   32804K rw---    [ anon ]

Please check it. Thanks.
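As an aside, a quick way to compare hosts is to total the [ anon ] regions from a pmap listing. This is a minimal sketch (not part of the original report); the sample lines are copied from the was01 listing above, and against a live client you would pipe `pmap <glusterfs pid>` into the same awk instead of the saved sample.

```shell
# Sum the sizes of all [ anon ] mappings in (saved) pmap output.
# Sample lines below are from the was01 pmap listing; on a live system,
# replace the printf with: pmap "$pid"
pmap_sample='00007f6d4b33d000     288K rw---    [ anon ]
00007f6d4b385000 3450888K rw---    [ anon ]
00007fff26b32000     132K rw---    [ stack ]'

total_kb=$(printf '%s\n' "$pmap_sample" |
  awk '/\[ anon \]/ { sub(/K$/, "", $2); sum += $2 } END { print sum + 0 }')

# 288 + 3450888 = 3451176
echo "total [ anon ]: ${total_kb} kB"
```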
I don't see the sosreport attached. Can you please provide more details on the gluster volume configuration and the nature of I/O operations being performed on the client? Thanks.
Created attachment 1114275 [details]
Client sosreport from the machine suffering the memory leak issue (WAS system)

I will upload 2 files. The first is sosreport-UK1-PRD-WAS01-20151202110452.tar.xz. This server runs the WAS service and is a glusterfs client. On UK1-PRD-WAS01, glusterfs uses about 5.2 GB of memory.
Created attachment 1114276 [details]
Client sosreport from a machine without the memory leak issue (WAS system)

This is a normal WAS system with no memory issue: sosreport-UK1-PRD-WAS03-20151202112104.tar.xz. It has the same environment as UK1-PRD-WAS01, but its glusterfs memory usage is about 0.4 GB.
Hi, Vijay. I uploaded 2 client files:

sosreport-UK1-PRD-WAS01-20151202110452.tar.xz --> glusterfs memory usage is high
sosreport-UK1-PRD-WAS03-20151202112104.tar.xz --> glusterfs memory usage is low

The glusterfs client systems are WAS systems, and I/O is normally generated by the WAS (Tomcat) clients. The server environment is configured as below.

--------------------------------------------------------------
mount volume

server1: UK1-PRD-FS01 ---- UK2-PRD-FS01  ==> replicated volume0
              |                 |
         distributed       distributed
              |                 |
server2: UK1-PRD-FS02 ---- UK2-PRD-FS02  ==> replicated volume1
              |                 |
              +--- geo-replicate to ukdr
===============================================
clients
UK1-PRD-WAS01 (mounts from UK1-PRD-FS01) --> memory problem (uploaded)
UK1-PRD-WAS02 (mounts from UK1-PRD-FS02) --> memory problem
UK1-PRD-WAS03 (mounts from UK1-PRD-FS01) --> no memory problem (uploaded)
... about 10 machines
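For reference, the layout described above could have been created roughly as follows. This is a hypothetical reconstruction, not the actual commands used: the volume name (gv0), brick paths, and the geo-replication slave volume name (ukdr::gv0-dr) are all assumptions; only the host names come from the diagram.

```shell
# Hypothetical reconstruction of the distribute-replicate layout above.
# Volume name, brick paths, and slave volume name are assumptions.
gluster volume create gv0 replica 2 \
  UK1-PRD-FS01:/bricks/b0 UK2-PRD-FS01:/bricks/b0 \
  UK1-PRD-FS02:/bricks/b1 UK2-PRD-FS02:/bricks/b1
gluster volume start gv0

# Geo-replication to the DR site (slave host/volume assumed):
gluster volume geo-replication gv0 ukdr::gv0-dr create push-pem
gluster volume geo-replication gv0 ukdr::gv0-dr start
```

These commands require a running gluster trusted storage pool, so they are shown as a configuration sketch only.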
(In reply to Vijay Bellur from comment #2) > I don't see the sosreport attached. Can you please provide more details on > the gluster volume configuration and the nature of I/O operations being > performed on the client? Thanks. Please review again. We are waiting for your response.
With the fix for https://bugzilla.redhat.com/show_bug.cgi?id=1560969, we find that these issues are now resolved.