Bug 861947 - Large writes in KVM host slow on fuse, but full speed on nfs
Status: CLOSED DEFERRED
Product: GlusterFS
Classification: Community
Component: fuse
Version: 3.3.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Assigned To: bugs@gluster.org
Reported: 2012-10-01 08:28 EDT by Johan Huysmans
Modified: 2014-12-14 14:40 EST

Doc Type: Bug Fix
Last Closed: 2014-12-14 14:40:29 EST
Type: Bug
Description Johan Huysmans 2012-10-01 08:28:23 EDT
I'm running a KVM virtual machine whose storage is a file in a filesystem directory.
This directory is the mountpoint of a Gluster volume, mounted with FUSE:
(mount -t glusterfs gluster1.eng-it.newtec.eu:testvolume /var/lib/libvirt/images)
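For reference, a guest backed by a disk-image file on that mountpoint might be started along these lines; the image filename, memory size, and cache mode below are illustrative assumptions, not the reporter's actual configuration:

# Sketch only: image name, memory size, and cache mode are assumed,
# not taken from this report.
qemu-kvm -m 2048 \
  -drive file=/var/lib/libvirt/images/vtest.img,if=virtio,cache=writethrough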

When performing a write test inside the virtual machine,
the speed is far lower than the maximum network speed (100 Mbit/s):
[root@vtest tmp]# dd if=/dev/zero of=/tmp/bar bs=1024k count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 178.552 s, 5.9 MB/s

When we perform the same write test on the KVM host itself, not in the virtual machine, and write directly to the mountpoint, we reach speeds close to the maximum network speed.
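(The host-side test was presumably of the same form as the guest test, e.g. the following; the output filename here is illustrative:)

dd if=/dev/zero of=/var/lib/libvirt/images/bar bs=1024k count=1000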

When we mount /var/lib/libvirt/images through NFS instead of FUSE:
mount -t nfs -o vers=3 gluster1.eng-it.newtec.eu:testvolume /var/lib/libvirt/images
the speed inside the virtual machine is what we expected:
[root@vtest ~]# dd if=/dev/zero of=/tmp/foo bs=1024k count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 93.9917 s, 11.2 MB/s
Comment 1 Jeff Darcy 2013-01-09 14:32:40 EST
Without any kind of explicit sync or flush, this test just measures the speed of getting in and out of the in-kernel NFS client vs. getting in and out of GlusterFS via FUSE.  Is that really the kind of performance you care about?  Most people would want disk-image writes to be synchronous to disk, not just to memory.
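To measure what actually reaches stable storage rather than the page cache, a variant of the reporter's test along these lines would be more telling (standard GNU dd flags; a sketch, not part of the original report):

dd if=/dev/zero of=/tmp/bar bs=1024k count=1000 conv=fdatasync  # fdatasync the data once, at the end
dd if=/dev/zero of=/tmp/bar bs=1024k count=1000 oflag=direct    # bypass the page cache for every write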
Comment 2 Ben England 2013-04-22 11:02:01 EDT
The 100 Mbit network is the problem here. The Gluster client has to write the data twice across the network, once to each server in the replicated volume. So the theoretical maximum throughput for the above workload would be:

(100 Mb/s / 8 bits/byte) / 2 replicas = 6.25 MB/s; at 5.9 MB/s you got about 95% of that. Get a faster network.
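A quick shell sanity check of that arithmetic (not from the original comment):

echo '100 / 8 / 2' | bc -l   # 6.25 MB/s theoretical maximum
echo '5.9 / 6.25' | bc -l    # ~0.944, i.e. roughly 95% of the theoretical maximum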
Comment 3 Niels de Vos 2014-11-27 09:53:56 EST
The version that this bug has been reported against does not get any updates from the Gluster Community anymore. Please verify whether this report is still valid against a current (3.4, 3.5 or 3.6) release and update the version, or close this bug.

If there has been no update before 9 December 2014, this bug will be closed automatically.
