I am testing small-file performance using glfs-bm. The environment is 6 servers and one client; the client configuration file is attached. I run the following command from the client's gluster mount directory /mnt/glusterfs/benchmark:

  /benchmarking/testing/glfs-bm -o READ 500000

The memory usage of the glusterfs process increases very quickly (observed with top). After about 10 minutes, glfs-bm exits abnormally with the log:

  glusterfs-fuse: 3461837: LOOKUP() /benchmark/tmpfile.627775 => -1 (Cannot allocate memory)
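For anyone reproducing this, a minimal sketch for capturing the client's memory growth over the run is below. The glfs-bm invocation and mount path are taken from the report; the process-name match and the sampling interval are assumptions.

  # Sketch: run the benchmark and sample the RSS of the glusterfs client
  # process once a minute until glfs-bm exits.
  cd /mnt/glusterfs/benchmark
  /benchmarking/testing/glfs-bm -o READ 500000 &
  bm_pid=$!
  while kill -0 "$bm_pid" 2>/dev/null; do
      date +%T
      ps -C glusterfs -o pid=,rss=    # RSS in KiB
      sleep 60
  done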
(In reply to comment #0)
> I am testing small-file performance using glfs-bm. The environment is 6
> servers and one client; the client configuration file is attached.

The attachment seems to be missing. Can you please attach it?
(In reply to comment #1)
Also, if you have the quick-read translator enabled in your client configuration, please disable it and give it a try. Please let us know the results with quick-read disabled.
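For reference, disabling quick-read in a client volfile means removing or commenting out its translator block; a minimal sketch follows. The volume and subvolume names are assumptions, since the attached volfile is missing; after removing the block, point whatever listed "quick-read" in its subvolumes at its former subvolume instead.

  # Hypothetical excerpt of a client volfile; names are assumptions.
  # Remove or comment out this block to disable quick-read.
  volume quick-read
      type performance/quick-read
      subvolumes io-cache
  end-volume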
Created attachment 224
Patch to add RETRYCONNECT option
When the quick-read option is off, the memory usage of the glusterfs process grows much more slowly than with quick-read on.
Created attachment 225
SRPM including RETRYCONNECT patch
With the quick-read option disabled, the memory usage of glusterfs grows more slowly, but I think it is still too high: after writing 500000 4096-byte files, memory usage is about 700 MB and shows no sign of decreasing.
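As a rough back-of-envelope check (assuming the ~700 MB figure is the client process RSS), that works out to a bit under 1.5 KB of client memory retained per file written, which is consistent with per-file state not being released:

  # Both numbers are taken from the comment above.
  echo $(( 700 * 1024 * 1024 / 500000 ))   # ~1468 bytes retained per file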
(In reply to comment #6)
Can you please do the following?

1) Check memory usage after dropping caches via

     echo 3 > /proc/sys/vm/drop_caches

   This should reduce memory consumption.

2) If you do not observe a drop, obtain a process state dump (written to
   /tmp/glusterdump.<pid> after kill -USR1 <pid>) and provide the contents of
   the mallinfo section. This would confirm whether there is a memory leak.
(In reply to comment #7)
> 1) Check memory usage after dropping caches via
>      echo 3 > /proc/sys/vm/drop_caches
>    This should reduce memory consumption.
> 2) If you do not observe a drop, obtain a process state dump (written to
>    /tmp/glusterdump.<pid> after kill -USR1 <pid>) and provide the contents
>    of the mallinfo section. This would confirm whether there is a memory
>    leak.

In my opinion, the first suggestion makes no difference: I check the real free memory through "free -m" (which accounts for the memory used by the I/O caches), and the usage is still high after one day. I have attached glusterdump.xxx to this bug.
(In reply to comment #7)
Following is the mallinfo section you may be interested in:

[mallinfo]
mallinfo_arena=214708224
mallinfo_ordblks=26346
mallinfo_smblks=12
mallinfo_hblks=1
mallinfo_hblkhd=266240
mallinfo_usmblks=0
mallinfo_fsmblks=384
mallinfo_uordblks=183492240
mallinfo_fordblks=31215984
mallinfo_keepcost=126912

[iobuf.global]
iobuf.global.iobuf_pool=0x91512f8
iobuf.global.iobuf_pool.page_size=131072
iobuf.global.iobuf_pool.arena_size=8388608
iobuf.global.iobuf_pool.arena_cnt=1

[iobuf.global.iobuf_pool.arena.1]
iobuf.global.iobuf_pool.arena.1.mem_base=0xb7769000
iobuf.global.iobuf_pool.arena.1.active_cnt=1
iobuf.global.iobuf_pool.arena.1.passive_cnt=63

[iobuf.global.iobuf_pool.arena.1.active_iobuf.1]
iobuf.global.iobuf_pool.arena.1.active_iobuf.1.ref=1
iobuf.global.iobuf_pool.arena.1.active_iobuf.1.ptr=0xb7f29000
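Reading the dump: mallinfo_uordblks is the space still allocated by the process, so here roughly 175 MiB of the ~205 MiB arena remains in use, which is memory that dropping kernel caches cannot release. A small helper for converting those counters to MiB is sketched below; it assumes the key=value layout shown above, and the statedump path is a placeholder.

  # Print the main mallinfo byte counters from a statedump in MiB.
  awk -F= '/^mallinfo_(arena|hblkhd|uordblks|fordblks)=/ {
      printf "%-20s %8.1f MiB\n", $1, $2 / (1024 * 1024)
  }' /tmp/glusterdump.<pid>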
Sorry for the delay in working on this. We will make sure to address it before 3.1.1.
PATCH: http://patches.gluster.com/patch/5350 in master (Remove libglusterfsclient option from glfs-bm benchmarking tool)
Internal enhancement; users need not be concerned.