Bug 1240782 - Quota: Larger than normal perf hit with quota enabled.
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: quota
Version: rhgs-3.1
Hardware: All
OS: Linux
Priority: urgent
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Vijaikumar Mallikarjuna
QA Contact: Ben Turner
Blocks: 1202842
 
Reported: 2015-07-07 17:54 UTC by Ben Turner
Modified: 2016-09-17 12:40 UTC
CC: 8 users

Fixed In Version: glusterfs-3.7.1-8
Doc Type: Bug Fix
Last Closed: 2015-07-29 05:10:36 UTC




Links
System: Red Hat Product Errata
ID: RHSA-2015:1495
Priority: normal
Status: SHIPPED_LIVE
Summary: Important: Red Hat Gluster Storage 3.1 update
Last Updated: 2015-07-29 08:26:26 UTC

Description Ben Turner 2015-07-07 17:54:22 UTC
Description of problem:

In previous builds of glusterfs I saw a minimal perf hit with quota enabled; on the 3.7-* builds I am seeing a 30-50% hit.

Version-Release number of selected component (if applicable):

glusterfs-3.7.1-7.el6rhs.x86_64

How reproducible:

Every run.

Steps to Reproduce:
1.  Enable quota on volume
2.  Set a 1 TB quota on /
3.  Run perf regression tests
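For reference, steps 1 and 2 map onto the standard gluster quota CLI; the volume name testvol below is a placeholder, not necessarily the volume used in this run:

```shell
# Enable the quota feature on the volume (testvol is a placeholder name)
gluster volume quota testvol enable

# Set a 1 TB usage limit on the volume root
gluster volume quota testvol limit-usage / 1TB

# Confirm the configured limit and current usage
gluster volume quota testvol list
```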

Actual results:

30-50% perf hit

Expected results:

Minimal perf hit

Additional info:

Comment 2 Ben Turner 2015-07-07 17:56:58 UTC
Here is a run without quota enabled:

On gqac006.sbu.lab.eng.bos.redhat.com  about to run - ['iozone', '-+m', '/mnt/tests/rhs-tests/beaker/rhs/auto-tests/setup/rhs-setup-dev-bturner/clients.ioz', '-+h', 'gqac006.sbu.lab.eng.bos.redhat.com', '-C', '-w', '-c', '-e', '-i', '0', '-+n', '-r', '64k', '-s', '8g', '-t', '16']
	Iozone: Performance Test of File I/O
	        Version $Revision: 3.408 $
		Compiled for 64 bit mode.
		Build: linux 

	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
	             Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer.
	             Ben England.

	Run began: Tue Jul  7 13:29:02 2015

	Network distribution mode enabled.
	Hostname = gqac006.sbu.lab.eng.bos.redhat.com
	Setting no_unlink
	Include close in write timing
	Include fsync in write timing
	No retest option selected
	Record Size 64 KB
	File size set to 8388608 KB
	Command line used: iozone -+m /mnt/tests/rhs-tests/beaker/rhs/auto-tests/setup/rhs-setup-dev-bturner/clients.ioz -+h gqac006.sbu.lab.eng.bos.redhat.com -C -w -c -e -i 0 -+n -r 64k -s 8g -t 16
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
	Throughput test with 16 processes
	Each process writes a 8388608 Kbyte file in 64 Kbyte records

	Test running:
	Children see throughput for 16 initial writers 	= 1595849.37 KB/sec
	Min throughput per process 			=   79769.95 KB/sec 
	Max throughput per process 			=  120293.01 KB/sec
	Avg throughput per process 			=   99740.59 KB/sec
	Min xfer 					= 5263808.00 KB
	Child[0] xfer count = 8236544.00 KB, Throughput =  119789.32 KB/sec
	Child[1] xfer count = 6366656.00 KB, Throughput =   89465.08 KB/sec
	Child[2] xfer count = 6425728.00 KB, Throughput =   89984.80 KB/sec
	Child[3] xfer count = 8233408.00 KB, Throughput =  118983.21 KB/sec
	Child[4] xfer count = 8372352.00 KB, Throughput =  120268.19 KB/sec
	Child[5] xfer count = 6183232.00 KB, Throughput =   88442.49 KB/sec
	Child[6] xfer count = 6189440.00 KB, Throughput =   86450.38 KB/sec
	Child[7] xfer count = 8388608.00 KB, Throughput =  120293.01 KB/sec
	Child[8] xfer count = 6738944.00 KB, Throughput =   93524.25 KB/sec
	Child[9] xfer count = 7720128.00 KB, Throughput =  112923.06 KB/sec
	Child[10] xfer count = 7713024.00 KB, Throughput =  114842.81 KB/sec
	Child[11] xfer count = 7759168.00 KB, Throughput =  114911.38 KB/sec
	Child[12] xfer count = 5296384.00 KB, Throughput =   82397.27 KB/sec
	Child[13] xfer count = 5304832.00 KB, Throughput =   81137.98 KB/sec
	Child[14] xfer count = 5313664.00 KB, Throughput =   82666.19 KB/sec
	Child[15] xfer count = 5263808.00 KB, Throughput =   79769.95 KB/sec

Here is one with quota:

On gqac006.sbu.lab.eng.bos.redhat.com  about to run - ['iozone', '-+m', '/mnt/tests/rhs-tests/beaker/rhs/auto-tests/setup/rhs-setup-dev-bturner/clients.ioz', '-+h', 'gqac006.sbu.lab.eng.bos.redhat.com', '-C', '-w', '-c', '-e', '-i', '0', '-+n', '-r', '64k', '-s', '8g', '-t', '16']
	Iozone: Performance Test of File I/O
	        Version $Revision: 3.408 $
		Compiled for 64 bit mode.
		Build: linux 

	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
	             Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer.
	             Ben England.

	Run began: Tue Jul  7 13:46:53 2015

	Network distribution mode enabled.
	Hostname = gqac006.sbu.lab.eng.bos.redhat.com
	Setting no_unlink
	Include close in write timing
	Include fsync in write timing
	No retest option selected
	Record Size 64 KB
	File size set to 8388608 KB
	Command line used: iozone -+m /mnt/tests/rhs-tests/beaker/rhs/auto-tests/setup/rhs-setup-dev-bturner/clients.ioz -+h gqac006.sbu.lab.eng.bos.redhat.com -C -w -c -e -i 0 -+n -r 64k -s 8g -t 16
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
	Throughput test with 16 processes
	Each process writes a 8388608 Kbyte file in 64 Kbyte records

	Test running:
	Children see throughput for 16 initial writers 	=  919309.62 KB/sec
	Min throughput per process 			=   51923.00 KB/sec 
	Max throughput per process 			=   64332.68 KB/sec
	Avg throughput per process 			=   57456.85 KB/sec
	Min xfer 					= 6817600.00 KB
	Child[0] xfer count = 8367168.00 KB, Throughput =   64087.78 KB/sec
	Child[1] xfer count = 6950016.00 KB, Throughput =   52869.46 KB/sec
	Child[2] xfer count = 6944896.00 KB, Throughput =   53197.22 KB/sec
	Child[3] xfer count = 8368448.00 KB, Throughput =   63957.26 KB/sec
	Child[4] xfer count = 8329664.00 KB, Throughput =   63664.64 KB/sec
	Child[5] xfer count = 6936640.00 KB, Throughput =   52829.02 KB/sec
	Child[6] xfer count = 6957248.00 KB, Throughput =   53067.10 KB/sec
	Child[7] xfer count = 8388608.00 KB, Throughput =   64332.68 KB/sec
	Child[8] xfer count = 6947008.00 KB, Throughput =   52804.14 KB/sec
	Child[9] xfer count = 8242688.00 KB, Throughput =   63453.32 KB/sec
	Child[10] xfer count = 8270784.00 KB, Throughput =   63492.46 KB/sec
	Child[11] xfer count = 8222144.00 KB, Throughput =   63159.34 KB/sec
	Child[12] xfer count = 6834240.00 KB, Throughput =   52311.53 KB/sec
	Child[13] xfer count = 6817600.00 KB, Throughput =   51923.00 KB/sec
	Child[14] xfer count = 6839744.00 KB, Throughput =   52198.70 KB/sec
	Child[15] xfer count = 6822592.00 KB, Throughput =   51961.96 KB/sec

As you can see, without quota enabled we get ~1.5 GB/sec; with quota we were getting under 1 GB/sec.
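As a quick sanity check, the relative drop between the two aggregate throughput numbers above can be computed directly (just arithmetic on the reported figures):

```shell
# Percentage drop from the no-quota aggregate (1595849.37 KB/sec)
# to the quota-enabled aggregate (919309.62 KB/sec)
awk 'BEGIN { printf "%.1f%% hit\n", (1595849.37 - 919309.62) / 1595849.37 * 100 }'
```

That works out to roughly a 42% hit, consistent with the 30-50% range reported in the description.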

Comment 6 Ben Turner 2015-07-09 04:14:45 UTC
I am running through seq write tests manually right now; I got through 5 iterations and everything is looking good:

	Children see throughput for 16 initial writers 	= 1438041.80 KB/sec
	Children see throughput for 16 initial writers 	= 1472242.83 KB/sec
	Children see throughput for 16 initial writers 	= 1432595.64 KB/sec
	Children see throughput for 16 initial writers 	= 1388314.14 KB/sec
	Children see throughput for 16 initial writers 	= 1466760.23 KB/sec

That's an average of ~1,439 MB/sec; without quota enabled we are seeing 1,500-1,600 MB/sec. That's only a 3-7% hit depending on which baseline you use, and that's closer to what we were seeing with quota enabled in previous releases. I'll run the full regression suite overnight and see what we get, but seq writes were the biggest problem and they are looking MUCH better on the -8 build.
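The average quoted above follows from the five "initial writers" numbers in the previous comment:

```shell
# Mean of the five aggregate throughput figures (KB/sec) from the runs above
awk 'BEGIN {
  n = split("1438041.80 1472242.83 1432595.64 1388314.14 1466760.23", r, " ")
  for (i = 1; i <= n; i++) sum += r[i]
  printf "avg = %.0f KB/sec (~%.0f MB/sec)\n", sum / n, sum / n / 1000
}'
```

This matches the ~1,439 MB/sec figure quoted in the comment.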

Comment 9 Ben Turner 2015-07-14 22:04:45 UTC
Verified on:

Gluster: glusterfs-3.7.1-9.el6rhs.x86_64

Comment 10 errata-xmlrpc 2015-07-29 05:10:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

