Bug 1330235 - [Perf] : ~30% regression on small file delete-renamed and rmdir FOPs
Summary: [Perf] : ~30% regression on small file delete-renamed and rmdir FOPs
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: distribute
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Nithya Balachandran
QA Contact: Ambarish
URL:
Whiteboard: dht-directory-consistency
Depends On:
Blocks:
 
Reported: 2016-04-25 17:16 UTC by Ambarish
Modified: 2016-08-23 06:16 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-23 06:16:40 UTC
Embargoed:



Description Ambarish 2016-04-25 17:16:11 UTC
Description of problem:
----------------------

It looks like small-file rmdir and delete-renamed FOPs have regressed by almost 30% with this build.

Version-Release number of selected component (if applicable):
------------------------------------------------------------

glusterfs-3.7.9-2.el6rhs.x86_64


How reproducible:
----------------

2/2

Steps to Reproduce:
-------------------

1. Mount the distributed-replicate volume via FUSE/gNFS/SMB; the issue is seen across all three access mechanisms
 
2. Run a small-file workload (specifically rmdir and delete-renamed); a command sketch follows these steps

3. Check for regression from 3.1.2
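
For reference, a minimal command sketch of steps 1-2 using a FUSE mount. The mount point and smallfile path are taken from the CLI example in comment 3; the server hostname is one of the nodes from the volume configuration in the additional info section and is shown only for illustration.

# Native FUSE mount (gNFS/SMB mounts hit the same issue); hostname illustrative
mount -t glusterfs gqas001.sbu.lab.eng.bos.redhat.com:/testvol /gluster-mount

# smallfile needs a mkdir pass before rmdir (and rename before delete-renamed)
python /small-files/smallfile/smallfile_cli.py --operation mkdir --threads 8 --files 10000 --top /gluster-mount
python /small-files/smallfile/smallfile_cli.py --operation rmdir --threads 8 --files 10000 --top /gluster-mount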

Actual results:
---------------

The measured IOPS with this build is more than 10% below the baseline (a drop of roughly 30%).

Expected results:
-----------------

The expected IOPS value should be within +/-10% of the baseline.


Additional info:
----------------

OS : RHEL 6.8

*****************
VOL CONFIGURATION
*****************

[root@gqas001 ~]# gluster v info
 
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: b59cdf1b-ee19-4558-84ad-6e0e88bf4289
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gqas001.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick0
Brick2: gqas014.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick1
Brick3: gqas015.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick2
Brick4: gqas016.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick3
Options Reconfigured:
server.allow-insecure: on
performance.stat-prefetch: off
performance.readdir-ahead: on
[root@gqas001 ~]# 
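
For anyone recreating this setup, a rough sketch of how a 2 x 2 distributed-replicate volume with the options above could be created. The exact creation command is not recorded in this bug, so treat this as illustrative only (brick paths and hostnames are taken from the vol info output):

gluster volume create testvol replica 2 \
    gqas001.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick0 \
    gqas014.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick1 \
    gqas015.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick2 \
    gqas016.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick3
gluster volume start testvol
gluster volume set testvol server.allow-insecure on
gluster volume set testvol performance.stat-prefetch off
gluster volume set testvol performance.readdir-ahead on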

I am hitting this issue with and without "Small File Performance Enhancements".

Comment 2 Ambarish 2016-04-25 17:22:58 UTC
**********
With 3.1.2
**********

delete-renamed : 7229.080000 files/sec
rmdir : 2761.420000 files/sec
These numbers are from a "non-tuned" volume.

********************
With 3.1.3 (3.7.9-2)
********************

delete-renamed : 4728.684409 files/sec
rmdir : 1946.532115 files/sec
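
Relative to the 3.1.2 baseline above, these numbers work out to drops of roughly:

delete-renamed : (7229.08 - 4728.68) / 7229.08 ≈ 34.6%
rmdir          : (2761.42 - 1946.53) / 2761.42 ≈ 29.5%

i.e. both well beyond the 10% regression threshold.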


The small-file performance enhancements (lookup-optimize enabled, and client and server event threads set to 4) do not help with 3.1.3 either:

delete-renamed : 3897.576407 files/sec   
rmdir : 2718.358495 files/sec
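
The exact commands used to apply those tunables are not captured in this bug; assuming the standard volume-set options, it would be something along the lines of:

gluster volume set testvol cluster.lookup-optimize on
gluster volume set testvol client.event-threads 4
gluster volume set testvol server.event-threads 4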

Comment 3 Ambarish 2016-04-25 17:25:45 UTC
**************
EXACT WORKLOAD
**************

Order of operations :

(Create -> Read -> Append -> Rename -> Delete-Renamed -> Mkdir -> Rmdir -> Cleanup)*3

CLI example:

python /small-files/smallfile/smallfile_cli.py --operation rmdir --threads 8  --file-size 64 --files 10000 --top /gluster-mount --pause 1000 --host-set "`echo $CLIENT | tr ' ' ','`"
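
A sketch of how the full sequence above might be driven in one pass; the operation names are standard smallfile operations, but the loop itself is illustrative rather than the exact harness used:

CLIENT_LIST="`echo $CLIENT | tr ' ' ','`"
for pass in 1 2 3 ; do
    for op in create read append rename delete-renamed mkdir rmdir cleanup ; do
        python /small-files/smallfile/smallfile_cli.py --operation $op --threads 8 \
            --file-size 64 --files 10000 --top /gluster-mount --pause 1000 \
            --host-set "$CLIENT_LIST"
    done
done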

