Bug 1326143 - [GSS] Read performance not consistent [NEEDINFO]
Summary: [GSS] Read performance not consistent
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: core
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: Anoop
URL:
Whiteboard:
Duplicates: 1326144 (view as bug list)
Depends On:
Blocks:
 
Reported: 2016-04-11 23:00 UTC by Oonkwee Lim_
Modified: 2019-11-14 07:46 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-07 07:09:44 UTC
Target Upstream Version:
mpillai: needinfo? (jscalf)


Attachments: none

Description Oonkwee Lim_ 2016-04-11 23:00:22 UTC
Description of problem:
From the CU:
I have a dedicated non-production Gluster storage environment that I am benchmarking with iozone over gluster-fuse mounts from 12 distributed clients. Write performance is fairly consistent; however, read performance varies.

The file is 4 GB in size; however, the record size written is small, i.e. 2 KB.
As the graph shows, reads and writes to that file are dramatically slower for record sizes under 64 KB.

They are running iozone tests with a 4 GB file at record sizes from 2 KB through 2048 KB.
Their use case centers on record sizes of 24-54 KB.

They have no way to force the application to use a specific record length.
They need to understand the cause of the drop-off, or cliff, near the 64 KB mark.

They observe that Gluster deployments running on other vendors' systems do not exhibit this issue.

So they want to know what is specific to this configuration that could cause this behavior.

Also, while this is being investigated, do we have any general information or tuning guidance on how to improve read throughput?
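On the tuning question, a few read-path tunables that are commonly inspected in this kind of investigation are sketched below. These are generic starting points, not a fix confirmed for this case; `myvol` and `/dev/sdb` are placeholders for the actual volume name and brick device:

```shell
# Generic Gluster read-path tunables (myvol and /dev/sdb are placeholders).

# Check the current values first:
gluster volume get myvol performance.read-ahead
gluster volume get myvol performance.io-cache

# Client-side translators that can help sequential and small-record reads:
gluster volume set myvol performance.read-ahead on
gluster volume set myvol performance.io-cache on

# Kernel read-ahead on the brick block device (value is in 512-byte sectors):
blockdev --getra /dev/sdb
blockdev --setra 4096 /dev/sdb
```

Whether any of these help depends on the workload and the brick storage; comparing iozone runs before and after each change is the usual way to validate them.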

Version-Release number of selected component (if applicable):
Red Hat Enterprise Linux Server release 6.7 (Santiago)
Red Hat Gluster Storage Server 3.1 Update 1

How reproducible:
Easily, with their iozone tests.

Steps to Reproduce:
1. Run the iozone tests
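The exact iozone command line is not given in the case. A sketch of an invocation matching the described test (4 GB file, record sizes swept from 2 KB through 2048 KB, read and write phases, run against the gluster-fuse mount) might look like the following; `/mnt/glustervol` is a placeholder for the actual mount point:

```shell
# Hypothetical iozone sweep matching the description in this BZ.
#   -a              auto mode (sweep record sizes)
#   -s 4g           file size: 4 GB
#   -y 2k -q 2048k  record sizes from 2 KB to 2048 KB
#   -i 0 -i 1       test 0 = write/rewrite, test 1 = read/reread
#   -Rb <file>      write an Excel-format report of the results
iozone -a -s 4g -y 2k -q 2048k -i 0 -i 1 \
       -f /mnt/glustervol/iozone.tmp -Rb /tmp/iozone-results.xls
```

Running this on each of the 12 clients and comparing the read rows of the reports should reproduce the cliff below the 64 KB record size.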

Actual results:
Read performance drops off for record sizes below 64 KB.

Expected results:
No drop-off; read performance should be consistent across record sizes.

Additional info:
The sosreports and the I/O graphs can be accessed in collab-shell.usersys.redhat.com:/cases/01609171 and via the browser @ http://collab-shell.usersys.redhat.com/01609171/

If needed, I can attach them to the BZ.

Comment 2 Oonkwee Lim_ 2016-04-13 15:34:45 UTC
*** Bug 1326144 has been marked as a duplicate of this bug. ***

