I'm running a KVM virtual machine whose storage is a filesystem directory. The directory is the mountpoint of a Gluster volume, mounted via FUSE:

mount -t glusterfs gluster1.eng-it.newtec.eu:testvolume /var/lib/libvirt/images

When performing a write test inside the virtual machine, the speed is far lower than the maximum network speed (100 Mbit):

[root@vtest tmp]# dd if=/dev/zero of=/tmp/bar bs=1024k count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 178.552 s, 5.9 MB/s

When we run the same write test on the KVM host (not the virtual machine), writing directly to the mountpoint, we reach speeds up to the network maximum.

When we mount /var/lib/libvirt/images over NFS instead of FUSE:

mount -t nfs -o vers=3 gluster1.eng-it.newtec.eu:testvolume /var/lib/libvirt/images

the speed inside the virtual machine is what we expected:

[root@vtest ~]# dd if=/dev/zero of=/tmp/foo bs=1024k count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 93.9917 s, 11.2 MB/s
Without any kind of explicit sync or flush, this test mostly measures the speed of getting data into the in-kernel NFS client versus into GlusterFS via FUSE; that is, how fast the writes reach memory, not disk. Is that really the kind of performance you care about? Most people would want disk-image writes to be synchronous to disk, not just to memory.
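For reference, a variant of the test that forces the data to stable storage before dd reports its timing (assuming GNU coreutils dd, and using a smaller 100 MB run purely for illustration) would look like:

```shell
# conv=fdatasync makes dd call fdatasync() on the output file before
# exiting, so the reported time includes flushing data to the backing
# store, not just filling the page cache.
dd if=/dev/zero of=/tmp/bar bs=1024k count=100 conv=fdatasync
```

Using oflag=direct instead would bypass the page cache entirely, though not every filesystem supports O_DIRECT.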
The 100 Mbit network is the problem here. The Gluster client has to write the data twice across the network, once to each server in the replicated volume. So the theoretical maximum throughput for the workload above is (100 Mb/s / 8 bits/byte) / 2 replicas = 6.25 MB/s, and you got roughly 95% of that. Get a faster network.
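The arithmetic above can be checked with a small shell sketch (the link speed, replica count and measured figure are the ones quoted in this comment):

```shell
link_mbit=100   # network link speed, Mbit/s
replicas=2      # replica count of the Gluster volume
measured=5.9    # MB/s reported by dd inside the VM

# Each client write crosses the wire once per replica, so divide the
# byte throughput of the link by the replica count.
max=$(awk -v l="$link_mbit" -v r="$replicas" 'BEGIN { printf "%.2f", l / 8 / r }')
pct=$(awk -v m="$measured" -v x="$max" 'BEGIN { printf "%.0f", m / x * 100 }')
echo "theoretical max: ${max} MB/s, achieved: ${pct}% of it"
```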
The version that this bug has been reported against no longer receives updates from the Gluster community. Please verify whether this report is still valid against a current (3.4, 3.5 or 3.6) release and update the version, or close this bug. If there has been no update before 9 December 2014, this bug will be closed automatically.