Bug 1380276 - Poor write performance with arbiter volume after enabling sharding on arbiter volume
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: arbiter
Version: 3.2
Hardware: x86_64 Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.2.0
Assigned To: Ravishankar N
QA Contact: SATHEESARAN
Depends On: 1375125
Blocks: 1351528
Reported: 2016-09-29 03:48 EDT by SATHEESARAN
Modified: 2017-03-23 02:06 EDT
CC List: 6 users
Fixed In Version: glusterfs-3.8.4-3
Doc Type: If docs needed, set a value
Environment: virt-gluster integration
Last Closed: 2017-03-23 02:06:08 EDT
Type: Bug




External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:0486 normal SHIPPED_LIVE Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update 2017-03-23 05:18:45 EDT

Description SATHEESARAN 2016-09-29 03:48:12 EDT
Description of problem:
-----------------------
I see poor write performance on an arbiter volume after enabling sharding on the volume

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHEL 7.2
RHGS 3.2.0 ( interim build - glusterfs-3.8.4-1.el7rhgs )
qemu-kvm-1.5.3-105.el7_2.7.x86_64
qemu-img-1.5.3-105.el7_2.7.x86_64

How reproducible:
-----------------
Always; consistently reproducible

Steps to Reproduce:
-------------------
1. Create an arbiter volume and start it
2. FUSE-mount the volume on a RHEL 7.2 node
3. Write to the volume using dd and note the throughput
   # dd if=/dev/urandom of=/mnt/arbvol/file bs=128k count=100
4. Enable sharding on the arbiter volume, with shard-block-size set to 512MB
5. Repeat the write test (with the same params to dd)
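The steps above can be sketched as a shell session. The hostnames, brick paths, and volume name here are illustrative placeholders, not taken from the report:

```shell
# Hypothetical hosts and brick paths; adjust to your environment.
# 1. Create and start a 1x(2+1) arbiter volume
gluster volume create arbvol replica 3 arbiter 1 \
    server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/arb
gluster volume start arbvol

# 2. FUSE-mount the volume on the client node
mount -t glusterfs server1:/arbvol /mnt/arbvol

# 3. Baseline write test; note the throughput dd reports
dd if=/dev/urandom of=/mnt/arbvol/file bs=128k count=100

# 4. Enable sharding with a 512 MB shard block size
gluster volume set arbvol features.shard on
gluster volume set arbvol features.shard-block-size 512MB

# 5. Repeat the write test with the same dd parameters
dd if=/dev/urandom of=/mnt/arbvol/file2 bs=128k count=100
```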

Actual results:
---------------
Write throughput dropped drastically after enabling sharding: from ~10.5 MB/s down to roughly 600-730 kB/s

Expected results:
-----------------
Enabling sharding on the arbiter volume should not degrade write performance

Additional info:
----------------
Without sharding, VM installation takes 4-5 minutes;
after enabling sharding it takes 16-17 minutes
Comment 1 SATHEESARAN 2016-09-29 03:49:32 EDT
Profile information before and after enabling sharding on the arbiter volume.

Interval 3 (without sharding enabled) & Interval 4 (with sharding enabled)

Interval 3 Stats:
   Block Size:                  1b+ 
 No. of Reads:                    0 
No. of Writes:                  500 
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us              5      FORGET
      0.00       0.00 us       0.00 us       0.00 us              5     RELEASE
      0.00       0.00 us       0.00 us       0.00 us              1  RELEASEDIR
      0.24     295.00 us     295.00 us     295.00 us              1     OPENDIR
      0.95     234.00 us     150.00 us     359.00 us              5      UNLINK
      1.83     225.60 us     132.00 us     718.00 us             10       FLUSH
      2.22     273.00 us      52.00 us     604.00 us             10    FINODELK
      2.26     139.10 us      60.00 us     249.00 us             20     ENTRYLK
      2.28     561.80 us     467.00 us     626.00 us              5      CREATE
      3.11     382.50 us     343.00 us     457.00 us             10    FXATTROP
     11.12     297.35 us     135.00 us     852.00 us             46      LOOKUP
     75.99     186.98 us      71.00 us     368.00 us            500       WRITE
 
    Duration: 21 seconds
   Data Read: 0 bytes
Data Written: 500 bytes


Interval 4 Stats:
   Block Size:                  1b+ 
 No. of Reads:                    0 
No. of Writes:                  500 
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us              5      FORGET
      0.00       0.00 us       0.00 us       0.00 us              6     RELEASE
      0.00       0.00 us       0.00 us       0.00 us             10  RELEASEDIR
      0.03     148.00 us     148.00 us     148.00 us              1    GETXATTR
      0.04     222.00 us     222.00 us     222.00 us              1        OPEN
      0.07     186.50 us     143.00 us     230.00 us              2     INODELK
      0.30     336.80 us     285.00 us     384.00 us              5      UNLINK
      0.31     174.30 us      73.00 us     319.00 us             10     OPENDIR
      0.40     204.09 us      43.00 us     467.00 us             11       FLUSH
      0.58     657.80 us     531.00 us    1075.00 us              5      CREATE
      0.68     174.73 us      53.00 us     469.00 us             22     ENTRYLK
      0.74    2088.50 us    1737.00 us    2440.00 us              2       FSYNC
      5.15     319.14 us     125.00 us     963.00 us             91      LOOKUP
     13.94     157.70 us      38.00 us     641.00 us            499       WRITE
     29.12     240.62 us      44.00 us     997.00 us            683    FXATTROP
     48.65     147.36 us      24.00 us     784.00 us           1863    FINODELK
 
    Duration: 196 seconds
   Data Read: 0 bytes
Data Written: 500 bytes
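A rough comparison of the two profile intervals above (call counts copied from the gluster volume profile output in this comment) shows that sharding multiplies the locking and xattr FOPs issued per write, while the number of WRITEs stays essentially the same:

```python
# FOP call counts for the same 500-write dd run, taken from the
# Interval 3 (no sharding) and Interval 4 (sharding) stats above.
without_sharding = {"WRITE": 500, "FINODELK": 10, "FXATTROP": 10}
with_sharding = {"WRITE": 499, "FINODELK": 1863, "FXATTROP": 683}

for fop in ("WRITE", "FINODELK", "FXATTROP"):
    before, after = without_sharding[fop], with_sharding[fop]
    print(f"{fop:>8}: {before:>4} -> {after:>4}  ({after / before:.1f}x)")
# WRITE stays ~1x, but FINODELK grows ~186x and FXATTROP ~68x,
# which matches their dominance (48.65% and 29.12%) in Interval 4.
```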
Comment 2 SATHEESARAN 2016-09-29 03:51:03 EDT
Poor write performance from the client side

Creation of 5 files before enabling sharding
---------------------------------------------
100+0 records in
100+0 records out
13107200 bytes (13 MB) copied, 1.24413 s, 10.5 MB/s
100+0 records in
100+0 records out
13107200 bytes (13 MB) copied, 1.24385 s, 10.5 MB/s
100+0 records in
100+0 records out
13107200 bytes (13 MB) copied, 1.2441 s, 10.5 MB/s
100+0 records in
100+0 records out
13107200 bytes (13 MB) copied, 1.24745 s, 10.5 MB/s
100+0 records in
100+0 records out
13107200 bytes (13 MB) copied, 1.24473 s, 10.5 MB/s


Creation of 5 files after enabling sharding
---------------------------------------------
100+0 records in
100+0 records out
13107200 bytes (13 MB) copied, 21.592 s, 607 kB/s
100+0 records in
100+0 records out
13107200 bytes (13 MB) copied, 21.9961 s, 596 kB/s
100+0 records in
100+0 records out
13107200 bytes (13 MB) copied, 21.984 s, 596 kB/s
100+0 records in
100+0 records out
13107200 bytes (13 MB) copied, 21.9758 s, 596 kB/s
100+0 records in
100+0 records out
13107200 bytes (13 MB) copied, 18.0099 s, 728 kB/s
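The dd numbers above are internally consistent; a quick sanity check (dd reports decimal units, 1 kB = 1000 bytes) reproduces the throughputs from the byte count and elapsed times:

```python
# bs=128k count=100, as in the reproducer
BYTES = 128 * 1024 * 100  # 13107200 bytes

def throughput_kBps(seconds):
    """Decimal kB/s, the unit dd prints."""
    return BYTES / seconds / 1000

print(f"before sharding: {throughput_kBps(1.24413) / 1000:.1f} MB/s")  # ~10.5 MB/s
print(f"after sharding:  {throughput_kBps(21.592):.0f} kB/s")          # ~607 kB/s
```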
Comment 5 Ravishankar N 2016-10-16 06:18:30 EDT
Downstream patch: https://code.engineering.redhat.com/gerrit/87216
Comment 8 SATHEESARAN 2016-11-07 22:03:27 EST
Tested with the RHGS 3.2.0 interim build (glusterfs-3.8.4-3.el7rhgs); performance is no longer degraded as it was earlier. With sharding enabled, I could hit 10.7 MB/s on the FUSE-mounted arbiter volume (1x(2+1)), compared to a few hundred kB/s (~700 kB/s) earlier

[arbvol]# dd if=/dev/urandom of=file4 bs=128k count=100
100+0 records in
100+0 records out
13107200 bytes (13 MB) copied, 1.29223 s, 10.1 MB/s
Comment 10 errata-xmlrpc 2017-03-23 02:06:08 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html
