Bug 1656348 - Commit c9bde3021202f1d5c5a2d19ac05a510fc1f788ac causes ls slowdown
Summary: Commit c9bde3021202f1d5c5a2d19ac05a510fc1f788ac causes ls slowdown
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: unclassified
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Raghavendra G
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2018-12-05 10:08 UTC by Nithya Balachandran
Modified: 2019-03-25 16:32 UTC (History)
3 users

Fixed In Version: glusterfs-6.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-03-07 08:07:13 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:




Links:
Gluster.org Gerrit 21811 (Merged): performance/readdir-ahead: update stats from prefetched dentries, last updated 2018-12-07 06:57:26 UTC

Description Nithya Balachandran 2018-12-05 10:08:20 UTC
Description of problem:

ls on a mount point on which files were created is roughly 5x slower than ls on the same volume from a fresh mount point.


git bisect points to commit c9bde3021202f1d5c5a2d19ac05a510fc1f788ac as the culprit.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Create a single-brick volume and fuse-mount it on 2 different mount points (/mnt/1 and /mnt/2).
2. From /mnt/1, create 20,000 files:
for i in {1..20000}; do echo "let's scale" > xfile-$i; done
3. Run the following from both mounts (a consolidated script covering all three steps follows below):
time ls -l |wc
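
For convenience, the steps above can be combined into one script. This is a minimal sketch: the volume name (testvol), hostname, and brick path are assumptions for illustration, not taken from the report.

# Create and start a single-brick volume (volume name and brick path are hypothetical)
mkdir -p /bricks/testvol-brick
gluster volume create testvol $(hostname):/bricks/testvol-brick force
gluster volume start testvol
# Fuse-mount the same volume on two different mount points
mkdir -p /mnt/1 /mnt/2
mount -t glusterfs localhost:testvol /mnt/1
mount -t glusterfs localhost:testvol /mnt/2
# Create the files from the first mount
cd /mnt/1
for i in {1..20000}; do echo "let's scale" > xfile-$i; done
# Compare listing times
time ls -l | wc                     # on /mnt/1: slow, writes happened through this mount
cd /mnt/2 && time ls -l | wc        # on /mnt/2: fast, fresh view of the same volume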


Actual results:

On /mnt/1:
[root@rhgs313-7 g1]# time ls -l |wc
  20001  180002 1028906

real	0m7.663s
user	0m0.163s
sys	0m0.326s

On /mnt/2:
[root@rhgs313-7 g2]# time ls -l|wc
  20001  180002 1028906

real	0m1.351s
user	0m0.147s
sys	0m0.183s


Expected results:
ls times on both mount points should be comparable


Additional info:

Subsequent ls runs on /mnt/1 do not show improved performance until the client is remounted.
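
For reference, the remount that restores performance looks like the following (using the hypothetical volume name testvol from the script above):

umount /mnt/1
mount -t glusterfs localhost:testvol /mnt/1
cd /mnt/1 && time ls -l | wc    # timing should now match /mnt/2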

Comment 1 Raghavendra G 2018-12-06 04:52:09 UTC
readdir-ahead records that writes have happened on a directory so that it can handle writes that race with an in-progress prefetch. The problem is that this information is never cleared afterwards, and the prefetched stats are never updated with the new values, which causes this regression. Concurrent writes need more intelligent handling than the current logic provides.
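
One way to confirm that readdir-ahead is involved (a diagnostic suggestion not taken from this report; verify the option name on your version) is to disable the xlator before reproducing:

gluster volume set testvol performance.readdir-ahead off   # testvol is the hypothetical volume from the repro script
# re-run the reproduction steps from a fresh mount;
# if ls -l on the writing mount is no longer slow, readdir-ahead is implicated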

Comment 2 Worker Ant 2018-12-06 08:00:09 UTC
REVIEW: https://review.gluster.org/21811 (performance/readdir-ahead: update stats from prefetched dentries) posted (#1) for review on master by Raghavendra G

Comment 3 Raghavendra G 2018-12-07 04:28:48 UTC
with fix [1]:
=============
[root@rgowdapp mnt]# rm -rf *
[root@rgowdapp mnt]# for i in {1..4000}; do echo  "test" > $i ; done
[root@rgowdapp mnt]# time ls -l > /dev/null

real	0m0.162s
user	0m0.012s
sys	0m0.033s
[root@rgowdapp mnt]# cd

without fix [1]:
================
[root@rgowdapp ~]# umount /mnt/
[root@rgowdapp ~]# mount -t glusterfs localhost:ptop /mnt
[root@rgowdapp ~]# cd /mnt
[root@rgowdapp mnt]# rm -rf *
[root@rgowdapp mnt]# for i in {1..4000}; do echo  "test" > $i ; done
[root@rgowdapp mnt]# time ls -l > /dev/null

real	0m0.789s
user	0m0.019s
sys	0m0.062s

[1] https://review.gluster.org/21811

Comment 4 Worker Ant 2018-12-07 06:57:25 UTC
REVIEW: https://review.gluster.org/21811 (performance/readdir-ahead: update stats from prefetched dentries) posted (#4) for review on master by Raghavendra G

Comment 5 Shyamsundar 2019-03-25 16:32:33 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

