| Summary: | gluster client encountered out of memory very quickly when running glfs-bm | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Brightcove <brightcove> |
| Component: | unclassified | Assignee: | shishir gowda <sgowda> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 3.0.4 | CC: | amarts, gluster-bugs, nsathyan, vijay |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | i386 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | DNR | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Attachments: | | | |
Description
Brightcove
2010-05-31 03:44:56 UTC
(In reply to comment #0)

> I am testing the small files performance using glfs-bm. The environment is 6
> servers and one client, the client configuration file is attached. I run the

The attachment seems to be missing. Can you please attach it?

(In reply to comment #1)

Also, if you have the quick-read translator enabled in your client configuration, please disable it and give it a try. Please let us know the results with quick-read disabled.

Created attachment 224 [details]
Patch to add RETRYCONNECT option
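For context on the quick-read suggestion above: in a 3.0-era client volfile, quick-read is enabled by a translator block like the sketch below. The volume name and the subvolume it stacks on are illustrative assumptions, not taken from the (missing) attached configuration. Disabling it means removing or commenting out this block and pointing whatever referenced it at its subvolume instead.

```
# Hypothetical volfile fragment; "quickread" and "iocache" are example names.
volume quickread
  type performance/quick-read
  subvolumes iocache    # the translator directly below quick-read
end-volume
```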
When the quick-read option is off, the memory usage of the glusterfs process grows much more slowly than with quick-read on.

Created attachment 225 [details]
SRPM including RETRYCONNECT patch
With the quick-read option disabled, the memory usage of glusterfs grows slowly, but I think it is still high: after writing 500000 files of 4096 bytes each, the memory usage is about 700M and shows no sign of decreasing.

(In reply to comment #6)

Can you please do the following?

1) Check memory usage after dropping caches via `echo 3 > /proc/sys/vm/drop_caches`. This should reduce memory consumption.

2) If you do not observe a reduction, obtain a process state dump (written to /tmp/glusterdump.<pid> after `kill -USR1 <pid>`) and provide the contents of the mallinfo section. This would confirm whether there is a memory leak.

(In reply to comment #7)

> 1) Can you please check memory usage after dropping caches via
> echo 3 > /proc/sys/vm/drop_caches

In my opinion, the first suggestion has no effect: I check the real free memory through "free -m" (which accounts for the I/O caches), and the memory usage is still high after one day. I have attached glusterdump.xxx to this bug.

(In reply to comment #7)

> 2) Can you please obtain a process state dump and provide
> contents of the mallinfo section?

Following is the mallinfo section you may be interested in:

```
[mallinfo]
mallinfo_arena=214708224
mallinfo_ordblks=26346
mallinfo_smblks=12
mallinfo_hblks=1
mallinfo_hblkhd=266240
mallinfo_usmblks=0
mallinfo_fsmblks=384
mallinfo_uordblks=183492240
mallinfo_fordblks=31215984
mallinfo_keepcost=126912

[iobuf.global]
iobuf.global.iobuf_pool=0x91512f8
iobuf.global.iobuf_pool.page_size=131072
iobuf.global.iobuf_pool.arena_size=8388608
iobuf.global.iobuf_pool.arena_cnt=1

[iobuf.global.iobuf_pool.arena.1]
iobuf.global.iobuf_pool.arena.1.mem_base=0xb7769000
iobuf.global.iobuf_pool.arena.1.active_cnt=1
iobuf.global.iobuf_pool.arena.1.passive_cnt=63

[iobuf.global.iobuf_pool.arena.1.active_iobuf.1]
iobuf.global.iobuf_pool.arena.1.active_iobuf.1.ref=1
iobuf.global.iobuf_pool.arena.1.active_iobuf.1.ptr=0xb7f29000
```
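The numbers above follow standard glibc mallinfo semantics: mallinfo_uordblks (bytes currently allocated) is about 175 MB and mallinfo_fordblks (free bytes still held in the arena) is about 30 MB, so most of the roughly 205 MB arena is in active use; if uordblks keeps growing across successive dumps, that points at a leak rather than allocator fragmentation. As a convenience, here is a minimal shell sketch consolidating the diagnostic steps suggested in this thread; it must run as root, and the pidof lookup, the sleep, and the sed section-extraction are illustrative assumptions rather than part of the original instructions.

```sh
#!/bin/sh
# Consolidated memory diagnostics for the glusterfs client (run as root).
# Assumes a single glusterfs process on the box; otherwise pick the PID by hand.
PID=$(pidof glusterfs)

# 1) Drop the kernel page/dentry/inode caches, then re-check memory.
#    On older procps, compare the "-/+ buffers/cache" line for real usage.
echo 3 > /proc/sys/vm/drop_caches
free -m

# 2) Trigger a process state dump and print its mallinfo section.
kill -USR1 "$PID"
sleep 1   # give glusterfs a moment to write the dump
# The end pattern assumes an [iobuf...] section follows [mallinfo],
# as in the dump shown above.
sed -n '/^\[mallinfo\]/,/^\[iobuf/p' "/tmp/glusterdump.$PID"
```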
Sorry for the delay in working on this. We will make sure to address it before 3.1.1.

PATCH: http://patches.gluster.com/patch/5350 in master (Remove libglusterfsclient option from glfs-bm benchmarking tool)

Internal enhancement; users need not be bothered.