Bug 1289442
| Field | Value |
|---|---|
| Summary | high memory usage on client node |
| Product | [Community] GlusterFS |
| Reporter | hojin kim <khoj> |
| Component | fuse |
| Assignee | Vijay Bellur <vbellur> |
| Status | CLOSED WORKSFORME |
| Severity | high |
| Priority | medium |
| Version | mainline |
| CC | atumball, bugs, khoj |
| Target Milestone | --- |
| Keywords | Triaged |
| Target Release | --- |
| Hardware | x86_64 |
| OS | Linux |
| Fixed In Version | glusterfs-6.x |
| Doc Type | Bug Fix |
| Story Points | --- |
| Clones | 1371544, 1371547 (view as bug list) |
| Last Closed | 2019-05-09 10:01:34 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| oVirt Team | --- |
| Cloudforms Team | --- |
| Bug Blocks | 1371544, 1371547 |
Description
hojin kim
2015-12-08 06:47:05 UTC
I uploaded the sosreport and 3 txt files. The data shows that clients was01 and was02 have very large [anon] areas (about 3 GB each) in pmap output.

was01 (54 [anon] areas):

    00007f6d49c04000      76K r-x--  /usr/sbin/glusterfsd
    00007f6d49e16000       4K r----  /usr/sbin/glusterfsd
    00007f6d49e17000       8K rw---  /usr/sbin/glusterfsd
    00007f6d4b33d000     288K rw---  [ anon ]
    00007f6d4b385000 3450888K rw---  [ anon ]
    00007fff26b32000     132K rw---  [ stack ]
    00007fff26bf3000       4K r-x--  [ anon ]
    ffffffffff600000       4K r-x--  [ anon ]

was02:

    00007fdef3016000      76K r-x--  /usr/sbin/glusterfsd
    00007fdef3228000       4K r----  /usr/sbin/glusterfsd
    00007fdef3229000       8K rw---  /usr/sbin/glusterfsd
    00007fdef36fb000     288K rw---  [ anon ]
    00007fdef3743000 3096552K rw---  [ anon ]
    00007fff23fe8000     132K rw---  [ stack ]
    00007fff241f8000       4K r-x--  [ anon ]
    ffffffffff600000       4K r-x--  [ anon ]

On was03, however, the [anon] areas are small:

    00007fa42b6e9000      76K r-x--  /usr/sbin/glusterfsd
    00007fa42b8fb000       4K r----  /usr/sbin/glusterfsd
    00007fa42b8fc000       8K rw---  /usr/sbin/glusterfsd
    00007fa42c5de000     288K rw---  [ anon ]
    00007fa42c626000   32804K rw---  [ anon ]

Please check it. Thanks.

Vijay Bellur (comment #2):

I don't see the sosreport attached. Can you please provide more details on the gluster volume configuration and the nature of I/O operations being performed on the client? Thanks.

hojin kim:

Created attachment 1114275 [details]
Client sosreport from the system suffering the memory leak issue (WAS system)
I will upload two files.
The first is sosreport-UK1-PRD-WAS01-20151202110452.tar.xz.
This server runs the WAS service and is a glusterfs client.
On UK1-PRD-WAS01, the glusterfs process uses about 5.2 GB of memory.
Created attachment 1114276 [details]
Client sosreport from a system without the memory leak issue (WAS system)

This is a normal WAS system with no memory issue.
The file is sosreport-UK1-PRD-WAS03-20151202112104.tar.xz.
It has the same environment as UK1-PRD-WAS01,
but its glusterfs memory usage is only about 0.4 GB.
Hi, Vijay. I uploaded two client files:

    sosreport-UK1-PRD-WAS01-20151202110452.tar.xz  --> glusterfs memory usage is high
    sosreport-UK1-PRD-WAS03-20151202112104.tar.xz  --> glusterfs memory usage is low

The glusterfs client systems are WAS systems, and I/O is normally generated by the WAS (Tomcat) client. The server environment is as below:

    mount volume
    server1: UK1-PRD-FS01    UK2-PRD-FS01   ==> replicated volume0
                  |                |
              distributed     distributed
                  |                |
    server2: UK1-PRD-FS02    UK2-PRD-FS02   ==> replicated volume1
                  |                |
                  +--- georeplicate ---> ukdr
    ===============================================
    clients:
    UK1-PRD-WAS01 (attached at UK1-PRD-FS01) --> memory problem (uploaded)
    UK1-PRD-WAS02 (attached at UK1-PRD-FS02) --> memory problem
    UK1-PRD-WAS03 (attached at UK1-PRD-FS01) --> no memory problem (uploaded)
    ... about 10 machines in total

(In reply to Vijay Bellur from comment #2)
> I don't see the sosreport attached. Can you please provide more details on
> the gluster volume configuration and the nature of I/O operations being
> performed on the client? Thanks.

Please review again. We are waiting for your response.

With the fix for https://bugzilla.redhat.com/show_bug.cgi?id=1560969, we find that these issues are now resolved.
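For reference, a layout like the one diagrammed above could be built with commands along the following lines. This is a hypothetical reconstruction under stated assumptions: the volume name (gv0), brick paths, and the slave volume name are invented for illustration and do not come from the report.

```shell
# Hypothetical reconstruction: a 2x2 distributed-replicated volume
# across the four FS nodes (brick paths are invented for this sketch).
gluster volume create gv0 replica 2 \
    UK1-PRD-FS01:/bricks/b0 UK2-PRD-FS01:/bricks/b0 \
    UK1-PRD-FS02:/bricks/b1 UK2-PRD-FS02:/bricks/b1
gluster volume start gv0

# Geo-replication to the "ukdr" DR site (slave volume name is hypothetical).
gluster volume geo-replication gv0 ukdr::gv0-dr create push-pem
gluster volume geo-replication gv0 ukdr::gv0-dr start

# Clients mount over FUSE, e.g. on UK1-PRD-WAS01:
mount -t glusterfs UK1-PRD-FS01:/gv0 /mnt/gv0
```

The FUSE mount is the `glusterfs` client process whose [anon] growth is discussed in this bug.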