Bug 1630798 - Add performance options to virt profile
Summary: Add performance options to virt profile
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Krutika Dhananjay
QA Contact:
URL:
Whiteboard:
Depends On: 1619627
Blocks:
 
Reported: 2018-09-19 10:07 UTC by Krutika Dhananjay
Modified: 2019-03-25 16:30 UTC (History)
11 users (show)

Fixed In Version: glusterfs-6.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1619627
Environment:
Last Closed: 2019-03-25 16:30:43 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Krutika Dhananjay 2018-09-19 10:07:14 UTC
+++ This bug was initially created as a clone of Bug #1619627 +++

Description of problem:
------------------------
The option performance.client-io-threads, which until recently was unavailable as a tuning option for replicated volumes, is now available. The Gluster-for-VM-storage use case can benefit from adding it to the group virt profile, along with a few other options. The following is the list of suggested additions to group virt:

performance.client-io-threads on
client.event-threads 4
server.event-threads 4

In addition, "cluster.choose-local false" is beneficial, and is already covered.

In most random-I/O tests on flash storage, the single fuse thread is currently the bottleneck. Enabling client-io-threads, along with the other options, helps push IOPS well beyond what RHHI can currently deliver.
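
For reference, these settings can also be applied per volume with the gluster CLI while the profile change is pending; a minimal sketch, where <VOLNAME> is a placeholder for an actual volume name:

    gluster volume set <VOLNAME> performance.client-io-threads on
    gluster volume set <VOLNAME> client.event-threads 4
    gluster volume set <VOLNAME> server.event-threads 4

Once the virt group profile itself carries these defaults, applying the whole group in one step ("gluster volume set <VOLNAME> group virt") has the same effect.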

--- Additional comment from Sahina Bose on 2018-09-19 02:59:52 EDT ---

Proposing this as per the latest perf analysis in Bug 1616270.

Comment 1 Worker Ant 2018-09-19 10:25:16 UTC
REVIEW: https://review.gluster.org/21222 (extras: Add new options to group "virt") posted (#1) for review on master by Krutika Dhananjay

Comment 2 Atin Mukherjee 2018-09-19 15:48:40 UTC
(In reply to Krutika Dhananjay from comment #0)
> +++ This bug was initially created as a clone of Bug #1619627 +++
> 
> Description of problem:
> ------------------------
> The option performance.client-io-threads, which has been unavailable as a
> tuning option for replicated volumes in recent releases, is now available.
> Gluster for vm storage use-case can benefit by adding it to the group virt
> profile, along with a few others. The following is the list of suggested
> additions to group virt:
> 
> performance.client-io-threads on
> client.event-threads 4
> server.event-threads 4
> 
> In addition, "cluster.choose-local false" is beneficial, and is already
> covered.
> 
> In most random I/O tests on flash, the fuse thread is currently seen to be
> the bottleneck. client-io-threads, along with the other options, helps push
> IOPS far beyond what RHHI can currently provide.
> 
> --- Additional comment from Sahina Bose on 2018-09-19 02:59:52 EDT ---
> 
> Proposing this as per the latest perf analysis in Bug 1616270.

A summary of the % of performance gain by the tuning change would be ideal here.

Comment 3 Krutika Dhananjay 2018-09-20 05:11:37 UTC
(In reply to Atin Mukherjee from comment #2)
> (In reply to Krutika Dhananjay from comment #0)
> > +++ This bug was initially created as a clone of Bug #1619627 +++
> > 
> > Description of problem:
> > ------------------------
> > The option performance.client-io-threads, which has been unavailable as a
> > tuning option for replicated volumes in recent releases, is now available.
> > Gluster for vm storage use-case can benefit by adding it to the group virt
> > profile, along with a few others. The following is the list of suggested
> > additions to group virt:
> > 
> > performance.client-io-threads on
> > client.event-threads 4
> > server.event-threads 4
> > 
> > In addition, "cluster.choose-local false" is beneficial, and is already
> > covered.
> > 
> > In most random I/O tests on flash, the fuse thread is currently seen to be
> > the bottleneck. client-io-threads, along with the other options, helps push
> > IOPS far beyond what RHHI can currently provide.
> > 
> > --- Additional comment from Sahina Bose on 2018-09-19 02:59:52 EDT ---
> > 
> > Proposing this as per the latest perf analysis in Bug 1616270.
> 
> A summary of the % of performance gain by the tuning change would be ideal
> here.

Copy-pasting numbers from Nikhil's test:
| CONFIGURATION                                   | RANDOM WRITE IOPS |
|-------------------------------------------------|-------------------|
| ovirt-gluster-fuse_aio=native                   |              1680 |
| ovirt-gluster-fuse_aio=native_tuned             |             22165 |
|-------------------------------------------------|-------------------|
| ovirt-gluster-fuse_aio=native_128GBshard        |              7160 |
| ovirt-gluster-fuse_aio=native_128GBshard_tuned  |             14557 |
|-------------------------------------------------|-------------------|
| ovirt-gluster-Libgfapi                          |             15593 |
| ovirt-gluster-libgfapi-tuned                    |             21220 |
In all of the above, "tuned" = performance.client-io-threads=on + client.event-threads=4 + server.event-threads=4.

The rows whose configuration name ends in "tuned" reflect the numbers with these options applied.

-Krutika
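
As a minimal sketch of verifying that the tuned values are actually in effect on a test volume (<VOLNAME> is a placeholder; "gluster volume get" reports the resolved value for each option):

    gluster volume get <VOLNAME> performance.client-io-threads
    gluster volume get <VOLNAME> client.event-threads
    gluster volume get <VOLNAME> server.event-threads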

Comment 4 Worker Ant 2018-09-21 04:16:40 UTC
COMMIT: https://review.gluster.org/21222 committed in master by "Atin Mukherjee" <amukherj> with a commit message- extras: Add new options to group "virt"

In some of the recent performance tests on gluster-as-vm-image-store
use-case, it has been observed that sometimes the lone fuse thread can
hit near-100% CPU utilization and become a performance bottleneck.
Enabling client-io-threads (in addition to bumping up epoll threads on
server and client side) has shown to be helpful in getting around this
bottleneck and pushing more IOPs.

Change-Id: I231db309de0e37c79cd44f5666da4cd776fefa04
fixes: bz#1630798
Signed-off-by: Krutika Dhananjay <kdhananj>
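
The glusterd group files (extras/group-virt.example in the source tree, installed under /var/lib/glusterd/groups/) list one option per line in key=value form, so the additions this commit makes would look roughly like the following; the exact file contents are in the linked review (https://review.gluster.org/21222):

    performance.client-io-threads=on
    client.event-threads=4
    server.event-threads=4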

Comment 5 Shyamsundar 2019-03-25 16:30:43 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

