Description of problem: Replica 2 write performance tops out at ~500 MB/s.

Version-Release number of selected component (if applicable):
2 identical servers:
- 2 x E5-2690
- 256 GB DDR3-1600
- 40 Gb/s InfiniBand point-to-point link (ConnectX-2), connected mode

How reproducible:
Create a RAM-disk brick on each server and build a replica 2 volume. Mount the volume (TCP only or RDMA only) and try to write: only ~400 MB/s.

Steps to Reproduce:
On all servers:
- mkdir /mnt/ram
- mount -t tmpfs -o size=1024M tmpfs /mnt/ram
- mkdir /mnt/ram/b1

gluster vol create ram replica 2 [transport...] s1:/mnt/ram/b1 s2:/mnt/ram/b1
gluster vol start ram

mkdir /tmp/glu
mkdir /tmp/ram
mount -t tmpfs -o size=1024M tmpfs /tmp/ram
mount -t glusterfs [-o transport=rdma] s1:/ram /tmp/glu
dd if=/dev/sda of=/tmp/ram/0.bin bs=128M count=5   # prepare data
dd if=/tmp/ram/0.bin of=/tmp/glu/0.bin bs=128M

Result: 400-500 MB/s.

I tried TCP tuning, RDMA-only and TCP-only transports, and turning off all performance options on the volume, but I cannot get above 500 MB/s. The RAM disk itself reads and writes at 8 GB/s.

P.S. Reads from the volume vary between runs: 500 MB/s, 2 GB/s, 2 GB/s. Writes show very large waits on my cluster. In production I use an oVirt hyperconverged setup (3 servers, RAID0 of 8 x 1 TB SSD, replica 3 arbiter 1).

Actual results: Max write speed is 500 MB/s.

Expected results: Write speed should be 6-8 GB/s.
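(For reference: the exact set of performance options that was toggled is not recorded above, so the following is only an assumed example of what "turning off all performance options on the volume" typically looks like with the client-side performance translators, not the exact commands that were run.)

# assumed example: disabling GlusterFS client-side performance translators on the "ram" volume
gluster volume set ram performance.write-behind off
gluster volume set ram performance.read-ahead off
gluster volume set ram performance.io-cache off
gluster volume set ram performance.quick-read off
gluster volume set ram performance.stat-prefetch off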
With write window = 1GB:

[root@xintel1 cmdline]# dd if=/tmp/1/1.bin of=/tmp/0/1.bin bs=128M
536870912 bytes (537 MB) copied, 0.766988 s, 700 MB/s
[root@xintel1 cmdline]# dd if=/tmp/1/1.bin of=/tmp/0/1.bin bs=128M
536870912 bytes (537 MB) copied, 0.807878 s, 665 MB/s
[root@xintel1 cmdline]# dd if=/tmp/1/1.bin of=/tmp/0/1.bin bs=128M
536870912 bytes (537 MB) copied, 0.861966 s, 623 MB/s
[root@xintel1 cmdline]# dd if=/tmp/1/1.bin of=/tmp/0/1.bin bs=128M
536870912 bytes (537 MB) copied, 0.798036 s, 673 MB/s
[root@xintel1 cmdline]# dd if=/tmp/1/1.bin of=/tmp/0/1.bin bs=128M
536870912 bytes (537 MB) copied, 1.59656 s, 336 MB/s
[root@xintel1 cmdline]# dd if=/tmp/1/1.bin of=/tmp/0/1.bin bs=128M
536870912 bytes (537 MB) copied, 1.30761 s, 411 MB/s
[root@xintel1 cmdline]# dd if=/tmp/1/1.bin of=/tmp/0/1.bin bs=128M
536870912 bytes (537 MB) copied, 1.23974 s, 433 MB/s
[root@xintel1 cmdline]# dd if=/tmp/1/1.bin of=/tmp/0/1.bin bs=512M
536870912 bytes (537 MB) copied, 1.89813 s, 283 MB/s
[root@xintel1 cmdline]# dd if=/tmp/1/1.bin of=/tmp/0/1.bin bs=512M
536870912 bytes (537 MB) copied, 1.0454 s, 514 MB/s
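("write window = 1GB" above presumably refers to the write-behind window size on the volume; assuming that interpretation, it would have been set with something like the following.)

# assumed command behind "write window = 1GB" (volume-level write-behind window)
gluster volume set ram performance.write-behind-window-size 1GB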
With an NFS mount to gNFS I get 300 MB/s.
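(For reference, a gNFS mount of this volume would look roughly like the following; Gluster's built-in NFS server speaks NFSv3 over TCP, and the mount point here is just an example.)

mount -t nfs -o vers=3,proto=tcp s1:/ram /tmp/nfs   # mount the "ram" volume via Gluster's NFS server
dd if=/tmp/ram/0.bin of=/tmp/nfs/0.bin bs=128M      # repeat the same write test over NFS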
Brick: intel2:/mnt/ram/b1
-------------------------
Cumulative Stats:
   Block Size: 131072b+
 No. of Reads: 40
No. of Writes: 28672

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us              8     RELEASE
      0.00       0.00 us       0.00 us       0.00 us              6  RELEASEDIR
      0.00       2.83 us       1.00 us       8.00 us              6     OPENDIR
      0.00      19.43 us      14.00 us      25.00 us              7       FLUSH
      0.00      35.50 us      22.00 us      46.00 us              4      STATFS
      0.01      63.86 us      48.00 us     101.00 us              7 REMOVEXATTR
      0.02      45.06 us      11.00 us     129.00 us             16    GETXATTR
      0.03     102.88 us      77.00 us     123.00 us              8        OPEN
      0.05      73.84 us      33.00 us     131.00 us             19       FSTAT
      0.05      26.52 us      13.00 us      91.00 us             58    FINODELK
      0.07     128.56 us     103.00 us     166.00 us             16     XATTROP
      0.11     280.33 us     123.00 us     945.00 us             12     READDIR
      0.16     118.03 us      33.00 us    2967.00 us             40        READ
      0.48     141.82 us      38.00 us     458.00 us            102    FXATTROP
      0.91     555.98 us       6.00 us   21644.00 us             50      LOOKUP
      5.09      18.80 us      11.00 us    1050.00 us           8228     INODELK
     15.93   69172.43 us   65170.00 us   84091.00 us              7    TRUNCATE
     77.09      81.75 us      56.00 us    5169.00 us          28672       WRITE

    Duration: 126 seconds
   Data Read: 5242880 bytes
Data Written: 3758096384 bytes

Interval 1 Stats:
   Block Size: 131072b+
 No. of Reads: 0
No. of Writes: 20480

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us              5     RELEASE
      0.00      17.60 us      14.00 us      21.00 us              5       FLUSH
      0.01      58.80 us      48.00 us      79.00 us              5 REMOVEXATTR
      0.02      97.80 us      77.00 us     116.00 us              5        OPEN
      0.04      39.10 us      19.00 us      66.00 us             20     INODELK
      0.05      74.71 us      38.00 us     119.00 us             14       FSTAT
      0.05      26.93 us      13.00 us      91.00 us             42    FINODELK
      0.06     124.50 us     103.00 us     162.00 us             10     XATTROP
      0.06     129.80 us      77.00 us     190.00 us             10      LOOKUP
      0.54     148.45 us      38.00 us     458.00 us             76    FXATTROP
     16.84   70477.40 us   65170.00 us   84091.00 us              5    TRUNCATE
     82.32      84.12 us      59.00 us    5169.00 us          20480       WRITE

    Duration: 68 seconds
   Data Read: 0 bytes
Data Written: 2684354560 bytes
In RDMA-only mode:

Brick: intel2:/mnt/ram/b1
-------------------------
Cumulative Stats:
   Block Size: 131072b+
 No. of Reads: 0
No. of Writes: 32768

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us              8     RELEASE
      0.00       0.00 us       0.00 us       0.00 us              3  RELEASEDIR
      0.00       2.00 us       1.00 us       3.00 us              3     OPENDIR
      0.00      36.00 us      28.00 us      49.00 us              3      STATFS
      0.01      21.12 us      15.00 us      31.00 us              8       FLUSH
      0.01      29.44 us      10.00 us      68.00 us              9    GETXATTR
      0.02      59.75 us      50.00 us      73.00 us              8 REMOVEXATTR
      0.03     112.25 us      83.00 us     134.00 us              8        OPEN
      0.03      29.22 us      14.00 us      64.00 us             32    FINODELK
      0.04      95.69 us      53.00 us     135.00 us             13       FSTAT
      0.05      47.22 us      18.00 us     135.00 us             32     INODELK
      0.05     277.50 us      80.00 us     884.00 us              6     READDIR
      0.08     145.50 us     116.00 us     211.00 us             16     XATTROP
      0.12      90.38 us       7.00 us     200.00 us             39      LOOKUP
      0.28     108.29 us      37.00 us     452.00 us             78    FXATTROP
     17.60   66873.50 us   64404.00 us   70025.00 us              8    TRUNCATE
     81.69      75.79 us      56.00 us    1128.00 us          32768       WRITE

    Duration: 92 seconds
   Data Read: 0 bytes
Data Written: 4294967296 bytes

Interval 2 Stats:
   Block Size: 131072b+
 No. of Reads: 0
No. of Writes: 16384

 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us              4     RELEASE
      0.00      17.25 us      15.00 us      19.00 us              4       FLUSH
      0.01      52.00 us      50.00 us      54.00 us              4 REMOVEXATTR
      0.02      28.75 us      20.00 us      57.00 us             12    FINODELK
      0.03     106.75 us      83.00 us     127.00 us              4        OPEN
      0.04      94.83 us      53.00 us     118.00 us              6       FSTAT
      0.04      37.69 us      18.00 us      67.00 us             16     INODELK
      0.06     118.75 us      67.00 us     173.00 us              8      LOOKUP
      0.07     133.88 us     116.00 us     150.00 us              8     XATTROP
      0.19      92.03 us      37.00 us     405.00 us             32    FXATTROP
     18.00   68204.50 us   66035.00 us   70025.00 us              4    TRUNCATE
     81.53      75.42 us      56.00 us     223.00 us          16384       WRITE

    Duration: 18 seconds
   Data Read: 0 bytes
Data Written: 2147483648 bytes
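(The two per-brick breakdowns above, TCP and RDMA-only, come from GlusterFS volume profiling; for reference, they are collected with commands along these lines.)

gluster volume profile ram start   # enable io-stats collection on the volume
gluster volume profile ram info    # dump cumulative and per-interval FOP latencies for each brick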
Perf in RDMA mode:
Samples: 1K of event 'cycles', Event count (approx.): 4212235834

  Children   Self    Command     Shared Object       Symbol
+   52,99%   0,07%  glusterfsd  [kernel.kallsyms]   [k] system_call_fastpath
+   29,23%   0,00%  glusterfsd  libpthread-2.17.so  [.] 0xffff80dd2cab9af3
+   29,23%   0,00%  glusterfsd  [kernel.kallsyms]   [k] sys_pwrite64
+   29,23%   0,08%  glusterfsd  [kernel.kallsyms]   [k] do_sync_write
+   29,14%   0,00%  glusterfsd  [kernel.kallsyms]   [k] vfs_write
+   29,14%   0,00%  glusterfsd  [kernel.kallsyms]   [k] generic_file_aio_write
+   28,77%   0,00%  glusterfsd  [kernel.kallsyms]   [k] __generic_file_aio_write
+   28,58%   0,10%  glusterfsd  [kernel.kallsyms]   [k] generic_file_buffered_write
+   25,43%   0,00%  glusterfsd  [unknown]           [.] 0000000000000000
+   18,64%   0,00%  glusterfsd  [kernel.kallsyms]   [k] shmem_write_begin
+   18,57%   0,62%  glusterfsd  [kernel.kallsyms]   [k] shmem_getpage_gfp
+   16,32%   0,00%  glusterfsd  [unknown]           [.] 0x0000baadf00d0068
+   16,32%   0,00%  glusterfsd  [unknown]           [.] 0x00007f22c0009460
+   16,32%   0,00%  glusterfsd  trash.so            [.] 0xffff80dd39d87063
+   16,32%   0,00%  glusterfsd  libc-2.17.so        [.] truncate
+   16,32%   0,00%  glusterfsd  [kernel.kallsyms]   [k] sys_truncate
+   16,32%   0,00%  glusterfsd  [kernel.kallsyms]   [k] do_sys_truncate
+   16,32%   0,00%  glusterfsd  [kernel.kallsyms]   [k] vfs_truncate
+   16,32%   0,00%  glusterfsd  [kernel.kallsyms]   [k] do_truncate
+   16,32%   0,00%  glusterfsd  [kernel.kallsyms]   [k] notify_change
+   16,32%   0,00%  glusterfsd  [kernel.kallsyms]   [k] shmem_setattr
+   16,32%   0,00%  glusterfsd  [kernel.kallsyms]   [k] shmem_truncate_range
+   16,32%   0,23%  glusterfsd  [kernel.kallsyms]   [k] shmem_undo_range
+    7,84%   7,76%  glusterfsd  libpthread-2.17.so  [.] pthread_spin_lock
+    7,71%   0,17%  glusterfsd  [kernel.kallsyms]   [k] shmem_alloc_page
+    7,45%   0,00%  glusterfsd  [kernel.kallsyms]   [k] alloc_pages_vma
+    7,39%   0,30%  glusterfsd  [kernel.kallsyms]   [k] truncate_inode_page
+    7,34%   0,88%  glusterfsd  [kernel.kallsyms]   [k] release_pages
+    7,19%   0,00%  glusterfsd  [kernel.kallsyms]   [k] __pagevec_release
+    6,89%   0,68%  glusterfsd  [kernel.kallsyms]   [k] __alloc_pages_nodemask
+    6,70%   0,16%  glusterfsd  [kernel.kallsyms]   [k] delete_from_page_cache
+    6,42%   6,42%  glusterfsd  [kernel.kallsyms]   [k] copy_user_generic_string
+    5,84%   0,16%  glusterfsd  [kernel.kallsyms]   [k] free_hot_cold_page_list
+    5,71%   1,95%  glusterfsd  [kernel.kallsyms]   [k] get_page_from_freelist
+    5,61%   0,28%  glusterfsd  [kernel.kallsyms]   [k] free_hot_cold_page
+    4,74%   0,00%  glusterfsd  [kernel.kallsyms]   [k] mem_cgroup_uncharge_cache_page
+    4,67%   4,31%  glusterfsd  [kernel.kallsyms]   [k] __mem_cgroup_uncharge_common
+    4,60%   0,00%  glusterfsd  libmlx4-rdmav2.so   [.] 0xffff80dd426db780
+    4,59%   0,07%  glusterfsd  [kernel.kallsyms]   [k] mem_cgroup_cache_charge
+    4,46%   0,00%  glusterfsd  [kernel.kallsyms]   [k] mem_cgroup_charge_common
+    4,38%   3,21%  glusterfsd  [kernel.kallsyms]   [k] free_pcppages_bulk
+    4,32%   3,81%  glusterfsd  [kernel.kallsyms]   [k] __mem_cgroup_commit_charge
+    4,20%   0,00%  glusterfsd  [unknown]           [.] 0x00007f22b0003d50
+    3,59%   3,59%  glusterfsd  [kernel.kallsyms]   [k] __list_del_entry
+    3,56%   0,06%  glusterfsd  [kernel.kallsyms]   [k] list_del
+    3,35%   0,00%  glusterfsd  libpthread-2.17.so  [.] 0xffff80dd2cab922d

Perf in TCP mode:
Samples: 9K of event 'cycles', Event count (approx.): 29918011566

  Children   Self    Command     Shared Object       Symbol
+   60,45%   0,04%  glusterfsd  [kernel.kallsyms]   [k] system_call_fastpath
+   25,89%   0,00%  glusterfsd  [unknown]           [.] 0000000000000000
+   22,86%   0,10%  glusterfsd  [kernel.kallsyms]   [k] do_readv_writev
+   22,33%   0,07%  glusterfsd  [kernel.kallsyms]   [k] do_sync_readv_writev
+   21,95%   0,00%  glusterfsd  libpthread-2.17.so  [.] 0xffff80a668a10af3
+   21,92%   0,04%  glusterfsd  [kernel.kallsyms]   [k] sys_pwrite64
+   21,85%   0,08%  glusterfsd  [kernel.kallsyms]   [k] vfs_write
+   21,47%   0,03%  glusterfsd  [kernel.kallsyms]   [k] do_sync_write
+   21,40%   0,04%  glusterfsd  [kernel.kallsyms]   [k] generic_file_aio_write
+   21,35%   0,03%  glusterfsd  [kernel.kallsyms]   [k] __generic_file_aio_write
+   21,00%   0,13%  glusterfsd  [kernel.kallsyms]   [k] generic_file_buffered_write
+   15,55%   0,14%  glusterfsd  libc-2.17.so        [.] __libc_readv
+   15,17%   0,07%  glusterfsd  [kernel.kallsyms]   [k] sys_readv
+   15,07%   0,00%  glusterfsd  [kernel.kallsyms]   [k] vfs_readv
+   14,65%   0,03%  glusterfsd  [kernel.kallsyms]   [k] sock_aio_read
+   14,59%   0,04%  glusterfsd  [kernel.kallsyms]   [k] sock_aio_read.part.7
+   14,48%   0,02%  glusterfsd  [kernel.kallsyms]   [k] inet_recvmsg
+   14,44%   0,24%  glusterfsd  [kernel.kallsyms]   [k] tcp_recvmsg
+   11,92%   0,11%  glusterfsd  [kernel.kallsyms]   [k] shmem_write_begin
+   11,85%  11,02%  glusterfsd  libpthread-2.17.so  [.] pthread_spin_lock
+   11,70%   0,74%  glusterfsd  [kernel.kallsyms]   [k] shmem_getpage_gfp
+   11,50%  11,50%  glusterfsd  [kernel.kallsyms]   [k] copy_user_generic_string
+   10,85%   0,00%  glusterfsd  [unknown]           [.] 0x0000000000000004
+    9,88%   0,16%  glusterfsd  [kernel.kallsyms]   [k] tcp_transmit_skb
+    9,49%   0,04%  glusterfsd  [kernel.kallsyms]   [k] ip_queue_xmit
+    9,34%   0,00%  glusterfsd  [kernel.kallsyms]   [k] ip_local_out_sk
+    8,37%   0,00%  glusterfsd  [unknown]           [.] 0x000000000001fcbc
+    8,14%   0,03%  glusterfsd  libc-2.17.so        [.] __libc_writev
+    7,93%   0,08%  glusterfsd  [kernel.kallsyms]   [k] sys_writev
+    7,83%   0,00%  glusterfsd  [kernel.kallsyms]   [k] vfs_writev
+    7,59%   0,05%  glusterfsd  [kernel.kallsyms]   [k] sock_aio_write
+    7,54%   0,01%  glusterfsd  [kernel.kallsyms]   [k] inet_sendmsg
+    7,45%   0,04%  glusterfsd  [kernel.kallsyms]   [k] intel_map_page
+    7,40%   0,16%  glusterfsd  [kernel.kallsyms]   [k] tcp_sendmsg
+    7,31%   0,06%  glusterfsd  [kernel.kallsyms]   [k] __intel_map_single
+    7,26%   0,04%  glusterfsd  [kernel.kallsyms]   [k] ip_output

event_analyzing_sample: In trace_end:
There is 13548 records in gen_events table
Statistics about the general events grouped by thread/symbol/dso:

comm                           number     histogram
==========================================
glusterfsd                     13548      ##############

symbol                         number     histogram
==========================================================
native_write_msr_safe          1764       ###########
pthread_spin_lock              1211       ###########
copy_user_generic_string       1173       ###########
Unknown_symbol                 658        ##########
__list_del_entry               316        #########
_raw_spin_lock_irqsave         277        #########
_gf_msg                        275        #########
rb_prev                        270        #########
__mem_cgroup_uncharge_common   267        #########
free_pcppages_bulk             230        ########
__mem_cgroup_commit_charge     225        ########
mark_page_accessed             165        ########
put_page                       125        #######
get_pageblock_flags_group      124        #######
vfprintf                       122        #######
__radix_tree_lookup            119        #######
__memset_sse2                  114        #######
__glusterfs_this_location      110        #######
_int_malloc                    98         #######
_raw_spin_lock                 91         #######
get_page_from_freelist         87         #######
release_pages                  77         #######
do_csum                        74         #######
__find_get_pages               71         #######
pthread_getspecific            64         #######
shmem_getpage_gfp              63         ######
lookup_page_cgroup             60         ######
pthread_mutex_lock             59         ######
_int_free                      55         ######
alloc_iova                     54         ######
mem_get                        54         ######
__list_add                     53         ######
finish_task_switch             50         ######
__alloc_pages_nodemask         49         ######
__memcpy_ssse3_back            49         ######
gf_mem_set_acct_info           49         ######
_raw_spin_lock_irq             47         ######
free_hot_cold_page             47         ######
__gf_free                      45         ######
__mod_zone_page_state          45         ######
pthread_mutex_unlock           44         ######
page_waitqueue                 43         ######
__wake_up_bit                  41         ######
__libc_calloc                  40         ######
kfree                          40         ######
__inc_zone_state               39         ######
ipt_do_table                   38         ######
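(For reference, reports like the ones above are typically captured on the brick server while the dd test runs; the PID lookup and the 30-second sampling window below are just an assumed example, not the exact invocation used.)

pid=$(pidof glusterfsd)                # brick process serving /mnt/ram/b1 (assumes a single brick daemon)
perf record -g -p "$pid" -- sleep 30   # sample 'cycles' with call graphs during the write test
perf report                            # produces the Children/Self breakdown shown above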
[root@xintel2 ~]# perf stat -a -g -p 110232
^C
 Performance counter stats for process id '110232':

      33604,217420 task-clock (msec)         #    3,467 CPUs utilized            (1,90%)
           444 787 context-switches          #    0,013 M/sec                    (1,90%)
            39 759 cpu-migrations            #    0,001 M/sec                    (1,90%)
                 0 page-faults               #    0,000 K/sec                    (1,90%)
    12 270 079 483 cycles                    #    0,365 GHz                      (1,90%)
     6 796 842 214 stalled-cycles-frontend   #   55,39% frontend cycles idle     (0,01%)
     5 270 033 500 stalled-cycles-backend    #   42,95% backend cycles idle      (0,01%)
     2 911 092 877 instructions              #    0,24  insns per cycle
                                             #    2,33  stalled cycles per insn  (0,01%)
       394 494 858 branches                  #   11,739 M/sec                    (0,00%)
     <not counted> branch-misses                                                 (0,00%)

       9,692958135 seconds time elapsed
Detailed TCP mode, single-file dd test on node2:

-   62,39%   0,01%  glusterfsd  [k] system_call_fastpath
   - system_call_fastpath
        35,63% 0xeaf3
      + 26,82% __libc_readv
      + 12,85% __libc_writev
      + 11,56% truncate
      +  3,74% 0xf72c3
      +  2,58% pthread_cond_timedwait@@GLIBC_2.3.2
      +  2,32% fgetxattr
      +  1,07% __lll_lock_wait
      +  1,07% pthread_cond_signal@@GLIBC_2.3.2
      +  0,86% __lll_unlock_wake
      +  0,76% __fxstat64
      +  0,56% epoll_ctl
-   24,22%   0,08%  glusterfsd  [k] do_readv_writev
   - do_readv_writev
      + 67,43% vfs_readv
      + 32,54% vfs_writev
-   23,75%   0,14%  glusterfsd  [k] do_sync_readv_writev
   - do_sync_readv_writev
      - do_readv_writev
         + 67,54% vfs_readv
         + 32,46% vfs_writev
-   22,21%   0,00%  glusterfsd  [k] sys_pwrite64
     sys_pwrite64
     system_call_fastpath
     0xeaf3
-   22,17%   0,03%  glusterfsd  [k] vfs_write
     vfs_write
     sys_pwrite64
     system_call_fastpath
     0xeaf3
-   21,99%   0,03%  glusterfsd  [k] do_sync_write
   - do_sync_write
      + vfs_write
-   21,89%   0,05%  glusterfsd  [k] generic_file_aio_write
     generic_file_aio_write
     do_sync_write
     vfs_write
     sys_pwrite64
     system_call_fastpath
     0xeaf3
-   21,79%   0,00%  glusterfsd  [k] __generic_file_aio_write
     __generic_file_aio_write
     generic_file_aio_write
     do_sync_write
     vfs_write
     sys_pwrite64
     system_call_fastpath
     0xeaf3
-   21,40%   0,16%  glusterfsd  [k] generic_file_buffered_write
     generic_file_buffered_write
     __generic_file_aio_write
     generic_file_aio_write
     do_sync_write
     vfs_write
     sys_pwrite64
     system_call_fastpath
     0xeaf3
+   16,73%   0,05%  glusterfsd  [k] sys_readv
With Linux AIO enabled:

-   64,12%   0,04%  glusterfsd  [kernel.vmlinux]    [k] system_call_fastpath
   - system_call_fastpath
      + 44,99% io_submit
      + 27,21% __libc_readv
      + 10,80% __libc_writev
      +  4,54% 0xf72c3
      +  3,16% pthread_cond_timedwait@@GLIBC_2.3.2
      +  2,04% fgetxattr
      +  1,90% 0x644
      +  1,46% pthread_cond_signal@@GLIBC_2.3.2
      +  1,15% __lll_lock_wait
      +  0,94% epoll_ctl
      +  0,82% __lll_unlock_wake
      +  0,80% __fxstat64
+   48,58%   0,00%  glusterfsd  [unknown]           [.] 0000000000000000
+   29,09%   0,04%  glusterfsd  libaio.so.1.0.1     [.] io_submit
+   28,85%   0,01%  glusterfsd  [kernel.vmlinux]    [k] sys_io_submit
+   28,60%   0,18%  glusterfsd  [kernel.vmlinux]    [k] do_io_submit
+   27,14%   0,02%  glusterfsd  [kernel.vmlinux]    [k] generic_file_aio_write
+   27,05%   0,10%  glusterfsd  [kernel.vmlinux]    [k] __generic_file_aio_write
+   26,47%   0,10%  glusterfsd  [kernel.vmlinux]    [k] generic_file_buffered_write
+   24,14%   0,12%  glusterfsd  [kernel.vmlinux]    [k] do_readv_writev
+   23,30%   0,13%  glusterfsd  [kernel.vmlinux]    [k] do_sync_readv_writev
+   17,73%   0,03%  glusterfsd  libc-2.17.so        [.] __libc_readv
+   17,45%   0,13%  glusterfsd  [kernel.vmlinux]    [k] sys_readv
+   17,26%   0,00%  glusterfsd  [kernel.vmlinux]    [k] vfs_readv
+   16,55%   0,01%  glusterfsd  [kernel.vmlinux]    [k] sock_aio_read
+   16,50%   0,02%  glusterfsd  [kernel.vmlinux]    [k] sock_aio_read.part.7
+   16,49%   0,02%  glusterfsd  [kernel.vmlinux]    [k] inet_recvmsg
+   16,36%   0,20%  glusterfsd  [kernel.vmlinux]    [k] tcp_recvmsg
+   14,58%   0,03%  glusterfsd  [kernel.vmlinux]    [k] shmem_write_begin
+   14,23%   0,74%  glusterfsd  [kernel.vmlinux]    [k] shmem_getpage_gfp
+   13,29%  13,29%  glusterfsd  [kernel.vmlinux]    [k] copy_user_generic_string
+    9,79%   0,00%  glusterfsd  [unknown]           [.] 0x0000000000000004
+    9,60%   0,00%  glusterfsd  [unknown]           [.] 0x000000000001fcbc
+    9,19%   0,24%  glusterfsd  [kernel.vmlinux]    [k] intel_map_page
+    9,16%   0,19%  glusterfsd  [kernel.vmlinux]    [k] tcp_transmit_skb
+    8,89%   0,11%  glusterfsd  [kernel.vmlinux]    [k] __intel_map_single
+    8,65%   0,04%  glusterfsd  [kernel.vmlinux]    [k] ip_queue_xmit
+    8,56%   0,00%  glusterfsd  [kernel.vmlinux]    [k] ip_local_out_sk
+    8,10%   0,00%  glusterfsd  [kernel.vmlinux]    [k] intel_alloc_iova
+    8,02%   0,03%  glusterfsd  [kernel.vmlinux]    [k] common_interrupt
+    8,01%   0,00%  glusterfsd  [kernel.vmlinux]    [k] do_softirq
+    8,01%   0,00%  glusterfsd  [kernel.vmlinux]    [k] call_softirq
+    8,01%   0,01%  glusterfsd  [kernel.vmlinux]    [k] __do_softirq
+    7,97%   0,00%  glusterfsd  [kernel.vmlinux]    [k] do_IRQ
+    7,85%   0,00%  glusterfsd  [kernel.vmlinux]    [k] irq_exit
+    7,82%   0,00%  glusterfsd  [kernel.vmlinux]    [k] net_rx_action
+    7,70%   0,00%  glusterfsd  [ib_ipoib]          [k] ipoib_poll
+    7,66%   0,65%  glusterfsd  [kernel.vmlinux]    [k] alloc_iova
+    7,46%   0,11%  glusterfsd  [ib_ipoib]          [k] ipoib_cm_handle_rx_wc
+    7,31%   6,60%  glusterfsd  libpthread-2.17.so  [.] pthread_spin_lock
+    7,15%   0,00%  glusterfsd  [unknown]           [.] 0x00007fa734000c70
+    7,14%   0,04%  glusterfsd  [kernel.vmlinux]    [k] ip_output
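("Linux AIO enabled" here presumably refers to the brick-side posix translator's AIO setting; assuming the standard volume option, it is toggled like this.)

gluster volume set ram storage.linux-aio on   # have the brick submit writes via io_submit(), matching the io_submit path in the trace above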
The per-brick counters themselves look good, so where is the bottleneck?

[root@xintel1 ~]# gluster volume top ram write-perf bs 104857600 count 2 list-cnt 0
Brick: intel1:/mnt/ram/b1
Throughput 1754.70 MBps time 0.1195 secs
Brick: intel2:/mnt/ram/b1
Throughput 1989.71 MBps time 0.1054 secs
[root@xintel1 ~]# gluster volume top ram write-perf bs 104857600 count 2 list-cnt 0
Brick: intel1:/mnt/ram/b1
Throughput 1705.35 MBps time 0.1230 secs
Brick: intel2:/mnt/ram/b1
Throughput 1993.51 MBps time 0.1052 secs
[root@xintel1 ~]# gluster volume top ram write-perf
Brick: intel1:/mnt/ram/b1
MBps Filename                                        Time
==== ========                                        ====
2383 /1.bin                                          2016-08-26 11:03:05.619903
2299 /1.bin                                          2016-08-26 10:59:47.175730
2259 /1.bin                                          2016-08-26 10:56:24.305102
2114 /2.bin                                          2016-08-26 11:11:26.216565
1795 /2.bin                                          2016-08-26 11:10:54.378487
Brick: intel2:/mnt/ram/b1
MBps Filename                                        Time
==== ========                                        ====
2570 /1.bin                                          2016-08-26 11:02:56.438934
2221 /2.bin                                          2016-08-26 11:11:30.51475
2184 /1.bin                                          2016-08-26 10:57:59.271729
2184 /1.bin                                          2016-08-26 10:55:42.932653
2080 /2.bin                                          2016-08-26 11:10:55.551528
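(To help separate a raw network bottleneck from a Gluster/replication one, a baseline over the same IPoIB link could be measured with something like iperf3; hostnames below are just the example machines from this report.)

# on intel2 (server side)
iperf3 -s
# on intel1 (client side), single TCP stream over the IPoIB address used by the bricks
iperf3 -c intel2 -t 30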
All 3.8.x bugs are now reported against version 3.8 (without .x). For more information, see http://www.gluster.org/pipermail/gluster-devel/2016-September/050859.html
What is the performance you see on a plain NFS mount, without Gluster involved? Could you run that test and let us know?
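(A sketch of that test, exporting the same tmpfs directory over the kernel NFS server instead of Gluster; the export options, service name, and mount point are just an example.)

# on the server, export the tmpfs directory via kernel nfsd
echo '/mnt/ram *(rw,async,no_root_squash)' >> /etc/exports
systemctl start nfs-server && exportfs -ra
# on the client, mount it and repeat the dd write test
mount -t nfs -o vers=3 s1:/mnt/ram /tmp/knfs
dd if=/tmp/ram/0.bin of=/tmp/knfs/0.bin bs=128M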
This bug is getting closed because the 3.8 version is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days