Bug 1670031

Summary: performance regression seen with smallfile workload tests
Product: [Community] GlusterFS
Reporter: Amar Tumballi <atumball>
Component: core
Assignee: bugs <bugs>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: urgent
Docs Contact:
Priority: urgent
Version: mainline
CC: amukherj, atumball, bugs, guillaume.pavese, jahernan, nbalacha, pkarampu, rgowdapp, srangana
Target Milestone: ---
Keywords: Performance
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-6.0
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-03-25 16:33:11 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Amar Tumballi 2019-01-28 13:12:32 UTC
Description of problem:

When glusterfs-master performance is compared with the 3.12.15 release (i.e., the last of the 3.12 series), we see significant regressions on the master branch.


Version-Release number of selected component (if applicable):
master

How reproducible:
100%

Steps to Reproduce:
1. Run the gbench tests.

Comment 1 Worker Ant 2019-01-28 13:14:33 UTC
REVIEW: https://review.gluster.org/22107 (features/sdfs: disable by default) posted (#1) for review on master by Amar Tumballi

Comment 2 Worker Ant 2019-01-29 13:55:07 UTC
REVIEW: https://review.gluster.org/22107 (features/sdfs: disable by default) merged (#3) on master by Atin Mukherjee
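
The sdfs (serialized directory filesystem) translator remains available; this patch only turns it off by default. A minimal sketch of re-enabling it on a test volume to compare performance, assuming the features.sdfs option key that the patch toggles (verify with 'gluster volume set help' on your build):

    # gluster volume set <volname> features.sdfs on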

Comment 3 Atin Mukherjee 2019-01-29 13:56:17 UTC
It might be worth opening a separate bugzilla to track the perf regression for sdfs alone, considering this bug is used as a tracker.

Comment 4 Worker Ant 2019-02-04 05:14:45 UTC
REVIEW: https://review.gluster.org/22120 (inode: Reduce work load of inode_table->lock section) posted (#6) for review on master by Amar Tumballi

Comment 5 Worker Ant 2019-02-05 15:16:51 UTC
REVIEW: https://review.gluster.org/22156 (inode: granular locking) posted (#1) for review on master by Amar Tumballi

Comment 6 Worker Ant 2019-02-09 08:24:05 UTC
REVIEW: https://review.gluster.org/22183 (inode: create inode outside locked region) posted (#1) for review on master by Amar Tumballi

Comment 7 Worker Ant 2019-02-09 08:25:08 UTC
REVIEW: https://review.gluster.org/22184 (inode: make critical section smaller) posted (#1) for review on master by Amar Tumballi

Comment 8 Worker Ant 2019-02-09 11:51:34 UTC
REVIEW: https://review.gluster.org/22185 (inode: dentry_destroy outside of dentry_unset) posted (#1) for review on master by Amar Tumballi

Comment 9 Worker Ant 2019-02-11 04:48:59 UTC
REVIEW: https://review.gluster.org/22186 (inode: don't take lock on whole table during ref/unref) posted (#1) for review on master by Amar Tumballi

Comment 10 Worker Ant 2019-02-11 09:36:13 UTC
REVIEW: https://review.gluster.org/22188 (inode: do only required checks inside critical section.) posted (#1) for review on master by Amar Tumballi

Comment 11 Worker Ant 2019-02-11 11:07:13 UTC
REVIEW: https://review.gluster.org/22183 (inode: create inode outside locked region) merged (#4) on master by Amar Tumballi

Comment 12 Worker Ant 2019-02-13 17:33:05 UTC
REVIEW: https://review.gluster.org/22184 (inode: make critical section smaller) merged (#10) on master by Amar Tumballi

Comment 13 Worker Ant 2019-02-20 14:04:22 UTC
REVIEW: https://review.gluster.org/22242 (inode: reduce inode-path execution time) posted (#1) for review on master by Amar Tumballi

Comment 14 Worker Ant 2019-02-20 19:32:01 UTC
REVIEW: https://review.gluster.org/22243 (inode: handle list management outside of ref/unref code) posted (#1) for review on master by Amar Tumballi
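
The inode patches above (22120, 22156, 22183, 22184, 22185, 22186, 22188, 22242, 22243) share one theme: shrink the time spent holding inode_table->lock by moving allocation and other work outside the critical section. A minimal sketch of that pattern in C, using hypothetical types and names rather than the actual GlusterFS code:

    #include <pthread.h>
    #include <stdlib.h>

    typedef struct inode {
        struct inode *hash_next;
        unsigned long ino;
        int ref;
    } inode_t;

    typedef struct inode_table {
        pthread_mutex_t lock;   /* assumed initialized via pthread_mutex_init() */
        inode_t *hash_head;     /* single hash bucket, for brevity */
    } inode_table_t;

    /* Before: allocation, initialization and insertion all happened while
     * holding table->lock.  After (shown here): allocation and field setup
     * run outside the lock, so the critical section covers only the list
     * insertion. */
    inode_t *
    inode_new(inode_table_t *table, unsigned long ino)
    {
        inode_t *inode = calloc(1, sizeof(*inode));  /* outside the lock */
        if (!inode)
            return NULL;
        inode->ino = ino;
        inode->ref = 1;

        pthread_mutex_lock(&table->lock);            /* small critical section */
        inode->hash_next = table->hash_head;
        table->hash_head = inode;
        pthread_mutex_unlock(&table->lock);

        return inode;
    }

Under contention this reduces the serialized portion of each inode creation to two pointer writes, which is the effect patches such as 22183 and 22184 aim for.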

Comment 15 Nithya Balachandran 2019-03-11 10:48:28 UTC
Has git bisect been used to narrow down the patches that have caused the regression? The inode code has not changed in a long time, so it is unlikely to be the cause of the slowdown.

Comment 16 Amar Tumballi 2019-03-11 13:33:17 UTC
> Has git bisect been used to narrow down the patches that have caused the regression? The inode code has not changed in a long time, so it is unlikely to be the cause of the slowdown.

Ack, the inode code is certainly not the reason. The code/features identified as causing some of the regressions were:

* the no-root-squash PID set by the mkdir layout-setting code in DHT (affects mkdir)
* gfid2path xattr setting (affects rename)
* ctime setting (affects rmdir and a few other entry ops; this one seemed minor)
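
For A/B comparison runs, two of the features above can be switched off per volume. A sketch, assuming the storage.gfid2path and features.ctime option keys (verify against 'gluster volume set help' on your build):

    # gluster volume set <volname> storage.gfid2path off   # rename path
    # gluster volume set <volname> features.ctime off      # ctime on entry ops

The DHT no-root-squash PID behaviour is internal to the mkdir code path and does not appear to have a matching volume option.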

Comment 17 Shyamsundar 2019-03-25 16:33:11 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

Comment 18 Worker Ant 2019-05-15 12:47:46 UTC
REVISION POSTED: https://review.gluster.org/22242 (inode: reduce inode-path execution time) posted (#3) for review on master by Amar Tumballi