Bug 1676429 - distribute: Perf regression in mkdir path
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: 6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Susant Kumar Palai
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: glusterfs-6.0 1676430 1732875
Reported: 2019-02-12 09:31 UTC by Susant Kumar Palai
Modified: 2019-07-24 15:04 UTC
CC: 4 users

Fixed In Version: glusterfs-6.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1676430
Environment:
Last Closed: 2019-03-08 14:08:27 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:




Links
Gluster.org Gerrit 22304: io-threads: Prioritize fops with NO_ROOT_SQUASH pid (Merged, last updated 2019-03-08 14:08:26 UTC)

Description Susant Kumar Palai 2019-02-12 09:31:25 UTC
Description of problem:
There seems to be a perf regression of around 30% in the mkdir path with patch https://review.gluster.org/#/c/glusterfs/+/21062/.

Here are the results from gbench, which runs the smallfile tool internally.
Without patch: 3187.402238 2544.658604 2400.662029 (mkdirs per second)
With patch:    2439.311086 1654.222631 1634.522184 (mkdirs per second)

This bug was created to track the revert of the above commit.

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
Run gbench
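If the full gbench harness is not handy, the smallfile tool it wraps can be run directly against a mounted volume. A minimal sketch, assuming smallfile_cli.py is available and the volume is FUSE-mounted at /mnt/glustervol (thread and file counts are illustrative):

<<<<
python smallfile_cli.py --operation mkdir --threads 8 \
    --files 10000 --top /mnt/glustervol
>>>>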

Comment 1 Susant Kumar Palai 2019-02-20 10:15:39 UTC
Update:
After taking statedumps, the io-threads xlator showed a difference in latency. Here is the responsible code path:

<<<<
int
iot_schedule (call_frame_t *frame, xlator_t *this, call_stub_t *stub)
{
        int             ret = -1;
        iot_pri_t       pri = IOT_PRI_MAX - 1;
        iot_conf_t      *conf = this->private;

        /* Internal clients use pids below GF_CLIENT_PID_MAX (0), i.e.
         * negative pids; their fops are queued at least priority when
         * least-priority queueing is enabled. */
        if ((frame->root->pid < GF_CLIENT_PID_MAX) && conf->least_priority) {
                pri = IOT_PRI_LEAST;
                goto out;
        }
>>>>

It seems requests with a negative pid get the least priority.
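To confirm, least-priority queueing can be disabled per volume with the standard volume-set command (the volume name below is a placeholder):

<<<<
gluster volume set <VOLNAME> performance.enable-least-priority off
>>>>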


After re-testing with performance.enable-least-priority set to off, the results are back to normal. Here is the summary.

Numbers are in files/sec

Post with performance.enable-least-priority on:   5448.965051804044, 5382.812519425897, 5358.221152245441,

Post with performance.enable-least-priority off:  6589.996990998271, 6458.350431426266, 6568.009725869085

Pre:                                              6387.711992865287,  6412.12706152037, 6570.547263693283



Will send a patch to prioritize ops with no-root-squash pid.
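For illustration, a minimal sketch of the kind of exemption such a patch might add to the iot_schedule() path shown above; only the GF_CLIENT_PID_NO_ROOT_SQUASH name is taken from the patch title, and the exact merged change may differ:

<<<<
        /* Keep least-priority demotion for internal clients (negative
         * pids), but exempt fops tagged with the NO_ROOT_SQUASH pid so
         * that regular client traffic is not queued at least priority. */
        if ((frame->root->pid < GF_CLIENT_PID_MAX) &&
            (frame->root->pid != GF_CLIENT_PID_NO_ROOT_SQUASH) &&
            conf->least_priority) {
                pri = IOT_PRI_LEAST;
                goto out;
        }
>>>>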

Susant

Comment 2 Worker Ant 2019-03-06 03:18:32 UTC
REVIEW: https://review.gluster.org/22304 (io-threads: Prioritize fops with NO_ROOT_SQUASH pid) posted (#2) for review on release-6 by Susant Palai

Comment 3 Worker Ant 2019-03-08 14:08:27 UTC
REVIEW: https://review.gluster.org/22304 (io-threads: Prioritize fops with NO_ROOT_SQUASH pid) merged (#2) on release-6 by Susant Palai

Comment 4 Shyamsundar 2019-03-25 16:33:15 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

