Bug 1365449

Summary: [Perf]: Large file writes/reads are slow on Ganesha mounts
Product: [Community] GlusterFS
Component: ganesha-nfs
Version: 3.8
Status: CLOSED EOL
Severity: urgent
Priority: urgent
Hardware: x86_64
OS: Linux
Reporter: Ambarish <asoman>
Assignee: bugs <bugs>
CC: amukherj, bugs, jthottan, kkeithle, mzywusko, ndevos, skoduri
Keywords: Triaged
Target Milestone: ---
Target Release: ---
Doc Type: If docs needed, set a value
Type: Bug
Last Closed: 2017-11-07 10:42:53 UTC

Description Ambarish 2016-08-09 09:53:04 UTC
Description of problem:
-----------------------

Writes of any kind (sequential or random) are slow on Ganesha NFSv3 and NFSv4 mounts.

This is the cumulative throughput from 16 iozone writers:

*Sequential Writes*

Ganesha,v3 : 373037 kB/sec
Ganesha,v4 : 458696.5 kB/sec
GlusterNFS : 1287326 kB/sec

*Random Writes* 

Ganesha,v3 : 53497 kB/sec
Ganesha,v4 : 88717.35 kB/sec
GlusterNFS : 351374.5 kB/sec
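
Relative to Gluster NFS, that puts Ganesha v3 at roughly 29% of the sequential write throughput (v4 at ~36%) and roughly 15% of the random write throughput (v4 at ~25%).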

Server profiles will be attached to the bug soon.


Version-Release number of selected component (if applicable):
-------------------------------------------------------------

glusterfs-server-3.8.1-0.4.git56fcf39.el7rhgs.x86_64
nfs-ganesha-gluster-2.4-0.dev.26.el7rhgs.x86_64
pacemaker-libs-1.1.13-10.el7.x86_64
pcs-0.9.143-15.el7.x86_64


How reproducible:
-----------------

Consistently, with every variation tried.

Steps to Reproduce:
------------------

Run iozone sequential writes on Ganesha mounts in distributed, multithreaded mode:

iozone -+m <conf file> -+h <hostname> -C -w -c -e -i 0 -+n -r 64k -s 8g -t 16
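
For reference, a minimal sketch of what the command assumes (hostnames, VIP, and paths below are placeholders, not the ones used in this report). The -+m client file has one line per client: the client name, its working directory on the mount, and the path to the iozone binary.

# Example Ganesha mounts on each client (VIP and mount point are placeholders):
mount -t nfs -o vers=3 <ganesha-VIP>:/testvol /mnt/ganesha     # v3 runs
mount -t nfs -o vers=4.0 <ganesha-VIP>:/testvol /mnt/ganesha   # v4 runs

# Example -+m client file (one line per client):
client1.example.com /mnt/ganesha/work /usr/bin/iozone
client2.example.com /mnt/ganesha/work /usr/bin/iozone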


Actual results:
---------------

Sequential and random writes are slow.

Expected results:
-----------------

Write throughput should be comparable to Gluster NFS on the same volume.


Additional info:
----------------

Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 3ee2c046-939b-4915-908b-859bfcad0840
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gqas001.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick0
Brick2: gqas014.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick1
Brick3: gqas015.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick2
Brick4: gqas016.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick3
Options Reconfigured:
client.event-threads: 4
server.event-threads: 4
cluster.lookup-optimize: on
ganesha.enable: on
features.cache-invalidation: on
nfs.disable: on
performance.readdir-ahead: on
performance.stat-prefetch: off
server.allow-insecure: on
nfs-ganesha: enable
cluster.enable-shared-storage: enable
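
For anyone reproducing this configuration, a sketch of applying the same options with the standard gluster CLI (assumes the trusted pool, bricks, and volume already exist as shown above):

# Cluster-wide prerequisites for the Ganesha HA setup:
gluster volume set all cluster.enable-shared-storage enable
gluster nfs-ganesha enable

# Per-volume options as reconfigured on testvol:
gluster volume set testvol ganesha.enable on
gluster volume set testvol nfs.disable on
gluster volume set testvol client.event-threads 4
gluster volume set testvol server.event-threads 4
gluster volume set testvol cluster.lookup-optimize on
gluster volume set testvol features.cache-invalidation on
gluster volume set testvol performance.readdir-ahead on
gluster volume set testvol performance.stat-prefetch off
gluster volume set testvol server.allow-insecure on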

Comment 2 Ambarish 2016-08-23 14:39:10 UTC
Large file reads are also significantly slower than Gluster NFS:

gNFS : 2828911.5 kB/sec
Ganesha v3 : 2216916.485 kB/sec
Ganesha v4 : 1798245.5 kB/sec
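
That is roughly 78% of the gNFS read throughput for Ganesha v3 and roughly 64% for Ganesha v4.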

Server Profile shared over email.

Comment 3 Niels de Vos 2016-09-12 05:39:58 UTC
All 3.8.x bugs are now reported against version 3.8 (without .x). For more information, see http://www.gluster.org/pipermail/gluster-devel/2016-September/050859.html

Comment 4 Niels de Vos 2017-11-07 10:42:53 UTC
This bug is getting closed because the 3.8 version is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.