Bug 1365449 - [Perf] : Large file writes/reads are slow on Ganesha mounts
Summary: [Perf] : Large file writes/reads are slow on Ganesha mounts
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: ganesha-nfs
Version: 3.8
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-08-09 09:53 UTC by Ambarish
Modified: 2017-11-07 10:42 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-11-07 10:42:53 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments

Description Ambarish 2016-08-09 09:53:04 UTC
Description of problem:
-----------------------

Writes of any kind (sequential or random) are slow on Ganesha v3 and v4 mounts.

This is the cumulative throughput from 16 iozone writers:

*Sequential Writes*

Ganesha v3: 373037 kB/sec
Ganesha v4: 458696.5 kB/sec
GlusterNFS: 1287326 kB/sec

*Random Writes*

Ganesha v3: 53497 kB/sec
Ganesha v4: 88717.35 kB/sec
GlusterNFS: 351374.5 kB/sec

Server profiles will be attached to the bug soon.
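
For context, brick-side profiles of this kind are normally collected with the standard gluster profiling CLI; the exact procedure used for this report is not recorded here, and the output path below is illustrative, so treat this only as a sketch:

gluster volume profile testvol start
# ... run the iozone workload ...
gluster volume profile testvol info > /tmp/testvol_profile.txt
gluster volume profile testvol stop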


Version-Release number of selected component (if applicable):
-------------------------------------------------------------

glusterfs-server-3.8.1-0.4.git56fcf39.el7rhgs.x86_64
nfs-ganesha-gluster-2.4-0.dev.26.el7rhgs.x86_64
pacemaker-libs-1.1.13-10.el7.x86_64
pcs-0.9.143-15.el7.x86_64


How reproducible:
-----------------

Every which way I try.

Steps to Reproduce:
------------------

Run iozone sequential writes on the Ganesha mounts in distributed, multithreaded mode (a sketch of the -+m client list file follows the command):

iozone -+m <conf file> -+h <hostname> -C -w -c -e -i 0 -+n -r 64k -s 8g -t 16
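
For reference, the -+m client list file (the "<conf file>" placeholder above) is a plain-text file with one line per iozone client: the client hostname, the working directory on that client's Ganesha mount, and the path to the iozone binary. A minimal sketch with illustrative hostnames and paths, not the ones used for this run:

client1.example.com  /gluster-mount/iozone-work  /usr/bin/iozone
client2.example.com  /gluster-mount/iozone-work  /usr/bin/iozone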


Actual results:
---------------

Sequential and random writes are significantly slower than on Gluster NFS mounts.

Expected results:
-----------------

Write throughput on Ganesha mounts should be comparable to Gluster NFS.


Additional info:
----------------

Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 3ee2c046-939b-4915-908b-859bfcad0840
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gqas001.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick0
Brick2: gqas014.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick1
Brick3: gqas015.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick2
Brick4: gqas016.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick3
Options Reconfigured:
client.event-threads: 4
server.event-threads: 4
cluster.lookup-optimize: on
ganesha.enable: on
features.cache-invalidation: on
nfs.disable: on
performance.readdir-ahead: on
performance.stat-prefetch: off
server.allow-insecure: on
nfs-ganesha: enable
cluster.enable-shared-storage: enable
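
For reference, a sketch of the CLI that would apply the non-default options above; the actual command history on this cluster is not recorded in the bug, so the ordering is illustrative:

gluster volume set testvol client.event-threads 4
gluster volume set testvol server.event-threads 4
gluster volume set testvol cluster.lookup-optimize on
gluster volume set testvol features.cache-invalidation on
gluster volume set testvol performance.stat-prefetch off
gluster volume set all cluster.enable-shared-storage enable   # cluster-wide shared storage
gluster nfs-ganesha enable                                    # bring up the Ganesha HA cluster
gluster volume set testvol ganesha.enable on                  # export testvol through NFS-Ganesha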

Comment 2 Ambarish 2016-08-23 14:39:10 UTC
Large-file reads are also slow compared to Gluster NFS:

gNFS: 2828911.5 kB/sec
Ganesha v3: 2216916.485 kB/sec
Ganesha v4: 1798245.5 kB/sec
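
These read numbers would typically come from the iozone read/re-read phase run against the files left behind by the write pass (-w keeps them). The exact command for this run is not stated in the bug, so the following is only an assumption patterned on the write command in the description:

# assumed read-phase counterpart of the write command above (-i 1 = read/re-read)
iozone -+m <conf file> -+h <hostname> -C -w -c -e -i 1 -+n -r 64k -s 8g -t 16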

Server Profile shared over email.

Comment 3 Niels de Vos 2016-09-12 05:39:58 UTC
All 3.8.x bugs are now reported against version 3.8 (without .x). For more information, see http://www.gluster.org/pipermail/gluster-devel/2016-September/050859.html

Comment 4 Niels de Vos 2017-11-07 10:42:53 UTC
This bug is getting closed because the 3.8 version is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.

