Description of problem:
-----------------------
Writes of any kind (sequential or random) are slow on Ganesha v3 and v4 mounts.
Cumulative throughput from 16 iozone writers:

*Sequential Writes*
Ganesha,v3 : 373037 kB/sec
Ganesha,v4 : 458696.5 kB/sec
GlusterNFS : 1287326 kB/sec

*Random Writes*
Ganesha,v3 : 53497 kB/sec
Ganesha,v4 : 88717.35 kB/sec
GlusterNFS : 351374.5 kB/sec

Server profiles will be attached to the bug soon.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
glusterfs-server-3.8.1-0.4.git56fcf39.el7rhgs.x86_64
nfs-ganesha-gluster-2.4-0.dev.26.el7rhgs.x86_64
pacemaker-libs-1.1.13-10.el7.x86_64
pcs-0.9.143-15.el7.x86_64

How reproducible:
-----------------
Every which way I try.

Steps to Reproduce:
-------------------
Run iozone sequential writes on Ganesha mounts in a distributed, multithreaded way:

iozone -+m <conf file> -+h <hostname> -C -w -c -e -i 0 -+n -r 64k -s 8g -t 16

Actual results:
---------------
Sequential and random writes are slow.

Expected results:
-----------------
Writes should not be this slow.

Additional info:
----------------
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 3ee2c046-939b-4915-908b-859bfcad0840
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gqas001.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick0
Brick2: gqas014.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick1
Brick3: gqas015.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick2
Brick4: gqas016.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick3
Options Reconfigured:
client.event-threads: 4
server.event-threads: 4
cluster.lookup-optimize: on
ganesha.enable: on
features.cache-invalidation: on
nfs.disable: on
performance.readdir-ahead: on
performance.stat-prefetch: off
server.allow-insecure: on
nfs-ganesha: enable
cluster.enable-shared-storage: enable
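For anyone reproducing this: iozone's -+m option expects a client list file, one worker per line, giving the client hostname, its working directory on the mount, and the path to the iozone binary on that client. A minimal sketch of such a file follows; the hostnames and paths are placeholders, not the actual test clients from this report.

```shell
# Hypothetical iozone cluster config for -+m (assumption: two clients,
# each mounting the Ganesha export at /gluster-mount).
# Format per line: <client hostname> <working dir on client> <iozone binary path>
cat > /tmp/iozone_clients.conf <<'EOF'
client1.example.com /gluster-mount/iozone /usr/bin/iozone
client2.example.com /gluster-mount/iozone /usr/bin/iozone
EOF

# The distributed run from the report would then be launched as:
#   iozone -+m /tmp/iozone_clients.conf -+h <controller hostname> \
#          -C -w -c -e -i 0 -+n -r 64k -s 8g -t 16

wc -l < /tmp/iozone_clients.conf   # → 2
```

With -t 16, iozone cycles through the lines in this file to place its 16 threads, so listing each client once per desired worker (or reusing lines round-robin) controls thread placement.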
Large-file reads are also noticeably slower than Gluster NFS:

gNFS       : 2828911.5 kB/sec
Ganesha v3 : 2216916.485 kB/sec
Ganesha v4 : 1798245.5 kB/sec

Server profile shared over email.
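For reference, the server-side profiles mentioned above are typically gathered with the standard gluster CLI on one of the brick nodes (a sketch, assuming the volume name from this report):

```shell
# Enable per-brick FOP statistics on the volume under test
gluster volume profile testvol start

# ... run the iozone workload here ...

# Dump cumulative per-FOP latency and throughput statistics
gluster volume profile testvol info

# Stop collection when done
gluster volume profile testvol stop
```

The `info` output breaks down call counts and latency per file operation (WRITE, FSYNC, LOOKUP, etc.) per brick, which is what makes it useful for isolating where Ganesha's write path loses throughput relative to gNFS.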
All 3.8.x bugs are now reported against version 3.8 (without .x). For more information, see http://www.gluster.org/pipermail/gluster-devel/2016-September/050859.html
This bug is getting closed because the 3.8 version is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.