Bug 861947

Summary: Large writes in KVM host slow on fuse, but full speed on nfs
Product: [Community] GlusterFS
Component: fuse
Version: 3.3.0
Status: CLOSED DEFERRED
Severity: medium
Priority: medium
Reporter: Johan Huysmans <johan.huysmans>
Assignee: bugs <bugs>
CC: bengland, bugs, gluster-bugs, jdarcy, rwheeler
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Type: Bug
Last Closed: 2014-12-14 19:40:29 UTC

Description Johan Huysmans 2012-10-01 12:28:23 UTC
I'm running a KVM virtual machine with its storage in a filesystem directory.
This directory is a mountpoint of a glustervolume, mounted with fuse:
(mount -t glusterfs gluster1.eng-it.newtec.eu:testvolume /var/lib/libvirt/images)

When performing a write test inside the virtual machine,
the speed is far lower than the maximum network speed (100 Mbit/s):
[root@vtest tmp]# dd if=/dev/zero of=/tmp/bar bs=1024k count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 178.552 s, 5.9 MB/s

When we perform the same write test on the KVM host (not in the virtual machine) and write directly to the mountpoint, we reach speeds up to the full network speed.

When we mount /var/lib/libvirt/images over NFS instead of FUSE:
mount -t nfs -o vers=3 gluster1.eng-it.newtec.eu:testvolume /var/lib/libvirt/images
the speed inside the virtual machine is what we expected:
[root@vtest ~]# dd if=/dev/zero of=/tmp/foo bs=1024k count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 93.9917 s, 11.2 MB/s

Comment 1 Jeff Darcy 2013-01-09 19:32:40 UTC
Without any kind of explicit sync or flush, this test just measures the speed of getting in and out of the in-kernel NFS client vs. getting in and out of GlusterFS via FUSE.  Is that really the kind of performance you care about?  Most people would want disk-image writes to be synchronous to disk, not just to memory.
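For comparison, a variant of the test that forces the data to stable storage before dd reports a rate (an illustrative sketch reusing the reporter's path and block sizes, not a command from this report) would be:

dd if=/dev/zero of=/tmp/bar bs=1024k count=1000 conv=fdatasync   # fdatasync once at the end of the run
dd if=/dev/zero of=/tmp/bar bs=1024k count=1000 oflag=direct     # bypass the page cache on every write

conv=fdatasync includes the final flush of the page cache in the measured time, while oflag=direct avoids the cache entirely, so either one removes the in-memory caching effect described above.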

Comment 2 Ben England 2013-04-22 15:02:01 UTC
The 100 Mbit network is the problem here.  The Gluster client has to write the data twice across the network, once to each server in the replicated volume.  So your theoretical maximum throughput for the above workload would be:

(100 Mb/s / 8 bits/byte) / 2 replicas = 6.25 MB/s, and you got about 95% of that.  Get a faster network.
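As a sanity check, two illustrative commands (assuming the volume is named testvolume as in the mount line above, and that iperf3 is installed on both client and server) can confirm those two factors:

gluster volume info testvolume        # the Type and "Number of Bricks" lines show whether the volume is replicated
iperf3 -c gluster1.eng-it.newtec.eu   # raw TCP throughput from client to one server (run "iperf3 -s" there first)

On a 100 Mbit/s link, halving the measured TCP throughput gives roughly the write ceiling a replica-2 FUSE mount can reach.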

Comment 3 Niels de Vos 2014-11-27 14:53:56 UTC
The version that this bug was reported against no longer receives updates from the Gluster Community. Please verify whether this report is still valid against a current (3.4, 3.5 or 3.6) release and update the version, or close this bug.

If there has been no update before 9 December 2014, this bug will be closed automatically.