Bug 1467974 - [CSS-QE] [perf] [HCI] Random read performance is much worse than random write.
Summary: [CSS-QE] [perf] [HCI] Random read performance is much worse than random write.
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: sharding
Version: rhgs-3.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Krutika Dhananjay
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On:
Blocks: 1724792
Reported: 2017-07-05 17:30 UTC by Ben Turner
Modified: 2019-06-28 16:04 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-17 12:21:54 UTC

Description Ben Turner 2017-07-05 17:30:52 UTC
Description of problem:

When I run iozone random I/O tests on a Gluster mount, I see significantly better performance than when I run the same test inside a VM.  Running directly on Gluster, I see:

                GFS w/o Sharding   GFS w/ Sharding    20 VMs    30 VMs
Random Read              208,868           181,614    97,025   153,241
Random Write             468,539           713,662   470,604   443,067

Above is in KB / sec.

Random reads are under half of what writes are.  Given that this was a replica 3 volume, I would expect reads to be significantly faster, or at least on par.
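
For reference, an iozone invocation along the following lines exercises the same random read/write pattern; the exact flags, record size, and file size used in the runs above are not recorded here, so treat them as assumptions:

# Hypothetical iozone run: -i 0 creates the file, -i 2 runs the random read/write phase
# Record size (-r), file size (-s), and the mount path are examples only
iozone -i 0 -i 2 -r 64k -s 8g -f /mnt/glustervol/iozone.tmp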

Version-Release number of selected component (if applicable):

glusterfs-server-3.8.4-32.el7rhgs.x86_64.rpm

How reproducible:

Every time.

Steps to Reproduce:
1.  Run gbench on regular gluster mount
2.  Run gbench on GFS mount with sharding enabled (see the sketch after these steps).
3.  Run gbench on between 1 and 30 VMs across all three nodes in the POD.
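
For step 2, sharding would typically be enabled on the volume before the run; a minimal sketch, with a hypothetical volume name (the block size shown is the default, not necessarily what was used here):

# Enable sharding on the test volume (volume name and block size are examples)
gluster volume set testvol features.shard on
gluster volume set testvol features.shard-block-size 64MB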

Actual results:

Reads are significantly slower than writes.

Expected results:

Reads should be equal to or faster than writes.

Additional info:

Comment 2 Krutika Dhananjay 2017-07-06 00:20:01 UTC
Could you share the volume info output for both cases?

Comment 3 Ambarish 2017-07-06 06:25:00 UTC
Hi Ben/Other CSS-QE folks,

*Just curious about this one*

If I am reading the data correctly, random reads are 50% of what random writes show, with or without sharding (sharding's not really the culprit here, unless I am missing something).

On standalone Gluster, on a simple iozone random R/W test that reads and writes 2G from an 8G file, this is what I see:

Random  Read : 333575 kB/sec
Random Write : 525132 kB/sec

Looking at your RPM info, I think you are using RHEL 7 machines too, with the "throughput-performance" tuned profile.


So here's the thing. I would expect random reads to be significantly faster than random writes too. But it depends on how you have tuned your nodes :)

I think the reason you are seeing this _may_ be the tuned profile.

The only difference I can think of is the tuned profiles. rhs-high-throughput on RHEL 6 gives us significantly better random read/write performance. On RHEL 6, if I run the same test, random reads are almost twice as fast as random writes (if the old 3.2 data still holds good).




------------------------------------------------------
 from /etc/tune-profiles/rhs-high-throughput/ktune.sh
------------------------------------------------------


tuned_ra=65536
set_cpu_governor performance



---------------------------------------------------------------
Excerpt from /etc/tune-profiles/throughput-performance/ktune.sh
---------------------------------------------------------------
set_cpu_governor performance
and the default RA, which I think is 128 KB.

Can you maybe try doing manually what rhs-high-throughput does? Maybe bump up the RA? I think that may help.
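
A minimal sketch of doing that by hand on the brick devices; the device name is an example, and whether the profile's 65536 is meant as 512-byte sectors or as KB depends on the tuned version, so verify before applying:

# Raise read-ahead on a brick device (device name is hypothetical)
blockdev --setra 65536 /dev/sdb                    # value in 512-byte sectors (= 32 MB)
# or equivalently via sysfs, which takes KB:
echo 32768 > /sys/block/sdb/queue/read_ahead_kb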

I can try that on standalone Gluster, and maybe we can write a KBase article / enhance the perf tuning chapter of our respective admin guides together, to make it more RHEL 7 friendly, _if this is indeed the case_.

Comment 4 Ambarish 2017-07-06 07:18:41 UTC
Oh, I guess RHEV uses the virtual-host profile by default:

<snip>

start() {
	set_cpu_governor performance
	set_transparent_hugepages always
	disable_disk_barriers
	multiply_disk_readahead 4


</snip>

Comment 5 Ben Turner 2017-07-10 17:00:31 UTC
@Ambarish - I have the proper tuned profiles / volume configurations provided for HCI environments.  And yes, this doesn't have anything to do with sharding and/or VMs.  Even with the numbers you mentioned, reads are almost half of writes, and based on how Gluster is architected I don't think this should be the case.  With VM workloads, random read performance is becoming more and more important, and the fact that Gluster reads perform so poorly relative to writes is affecting the user experience for containers as well as VMs.

-b

Comment 7 Krutika Dhananjay 2017-07-11 10:10:09 UTC
(In reply to Krutika Dhananjay from comment #2)
> Could you share the volume info output for both cases?

In addition to the volume info, I'm not clear why the bug is assigned to the shard component, given that you see the same pattern even without sharding.

Additionally, could you share the volume profile output for these runs as a first step?
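
For reference, profile output for a run is usually gathered along these lines (the volume name is an example):

gluster volume profile testvol start
# ... run the iozone / gbench workload ...
gluster volume profile testvol info > randread-profile.txt
gluster volume profile testvol stop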

-Krutika

Comment 8 Sahina Bose 2017-10-16 11:30:00 UTC
Do we need to revisit the tuned profile for HC?
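
A quick way to check and switch the active tuned profile on the HC nodes; the rhgs-random-io profile name is an assumption and may not exist on every build, so verify with tuned-adm list first:

tuned-adm active                      # show the currently applied profile
tuned-adm list                        # list profiles available on the node
tuned-adm profile rhgs-random-io      # example: switch to a random-I/O oriented profile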

Comment 11 Sahina Bose 2018-10-17 12:21:54 UTC
As there has been no update with the requested information on this bug, and no headway has been made, deferring this bug.
Please reopen if the requested information becomes available and the bug is still seen.

