Bug 1326143

Summary: [GSS] Read performance not consistent
Product: Red Hat Gluster Storage
Reporter: Oonkwee Lim_ <olim>
Component: core
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
Status: CLOSED NOTABUG
QA Contact: Anoop <annair>
Severity: medium
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: bkunal, jscalf, mchangir, mpillai, olim, rgowdapp, rhs-bugs
Target Milestone: ---
Keywords: Triaged, ZStream
Target Release: ---
Flags: mpillai: needinfo? (jscalf)
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-12-07 07:09:44 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description Oonkwee Lim_ 2016-04-11 23:00:22 UTC
Description of problem:
From the customer:
I have a dedicated non-prod gluster storage environment that I am benchmarking using iozone with gluster-fuse mounts for 12 distributed clients. Write performance is pretty consistent; read performance, however, varies.
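(For context, a gluster-fuse client mount of the kind described is typically created as follows; the server and volume names here are placeholders, not details from the case:)

  # mount the volume over FUSE on each of the 12 clients (names are hypothetical)
  mount -t glusterfs gluster-server1:/glustervol /mnt/glustervol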

The file is 4 GB in size; however, the record sizes written are small, e.g. 2k.
As the graph shows, reads and writes to that file are dramatically slower for record sizes under 64k.

They are running iozone tests with a 4 GB file at record sizes from 2k through 2048k; a plausible invocation is sketched below.
Their use case is around a record size of 24k - 54k.
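(This is a sketch of what such a run might look like; the exact flags, test file path, and output file are assumptions, not taken from the case:)

  # sweep record sizes from 2k to 2048k against a 4 GB file on the fuse mount,
  # exercising write/rewrite (-i 0) and read/reread (-i 1)
  iozone -a -s 4g -y 2k -q 2048k -i 0 -i 1 \
         -f /mnt/glustervol/iozone.tmp -R -b iozone-results.xls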

They don't have a way to force the app to use a specific record length.
They need to understand the cause of the drop-off, or cliff, near the 64k mark.

They observe that gluster running on other vendors' systems does not exhibit this issue.

So they want to know what is specific to this configuration that could be causing it.

Also, while this is being investigated, do we have any general information/tuning guidance on how to improve read throughput?
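(One illustrative angle on that question: in RHGS the client-side read path is shaped by the read-ahead and io-cache translators, whose knobs can be tuned per volume. The volume name and values below are assumptions, not a confirmed fix for this case:)

  # check/tune client-side read-ahead and io-cache (illustrative values only)
  gluster volume set glustervol performance.read-ahead on
  gluster volume set glustervol performance.read-ahead-page-count 16
  gluster volume set glustervol performance.io-cache on
  gluster volume set glustervol performance.cache-size 256MB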

Version-Release number of selected component (if applicable):
Red Hat Enterprise Linux Server release 6.7 (Santiago)
Red Hat Gluster Storage Server 3.1 Update 1

How reproducible:
Easily with their iozone tests

Steps to Reproduce:
1. Run the iozone tests against the gluster-fuse mount (see the example invocation in the Description).

Actual results:
Read performance drops off sharply for I/O record sizes below 64k.

Expected results:
No drop-off.

Additional info:
The sosreports and the I/O graphs can be accessed in collab-shell.usersys.redhat.com:/cases/01609171 and via the browser at http://collab-shell.usersys.redhat.com/01609171/

If needed, I can attach them to the BZ.

Comment 2 Oonkwee Lim_ 2016-04-13 15:34:45 UTC
*** Bug 1326144 has been marked as a duplicate of this bug. ***