+++ This bug was initially created as a clone of Bug #1202669 +++
+++ This bug was initially created as a clone of Bug #1202541 +++

Description of problem:
ls -l performance has been greatly reduced on replicated volumes. I do not see this perf hit on distributed volumes.

Version-Release number of selected component (if applicable):

How reproducible:
Every time.

Steps to Reproduce:
1. Create 10k files
2. Clear buffers and cache on both servers and clients
3. Run ls -l on the files

Actual results:

Expected results:

Additional info:

--- Additional comment from Ben Turner on 2015-03-16 18:51:55 EDT ---

The commit that causes the perf degradation has been found to be the following:

<commit details>
Author: Krutika Dhananjay <kdhananj>
Date:   Thu Jan 22 17:02:20 2015 +0530

    cluster/afr: When parent and entry read subvols are different, set entry->inode to NULL
</commit details>

--- Additional comment from Krutika Dhananjay on 2015-03-17 03:19:56 EDT ---

Let me briefly explain what the problem with AFR was:

In AFR, every file has a "read child" associated with it. Read operations on a file (readv in the data read category; getxattr, stat, etc. in the metadata read category; and readdirp in the entry read category) are always served from the designated read child of the file/dir, unless that child holds a bad copy of the file (i.e., one in need of a self-heal). I can think of at least two reasons why this is useful:

a. It is sufficient to serve reads from only one of the copies of a file, since all copies are identical under normal circumstances.

b. Certain attributes like mtime/ctime/atime might differ across the copies of a file on different bricks due to clock skew across the servers. In these cases, it is good to always return the same values across consecutive requests for these attributes, because we do not want the application to mistakenly think that the file underwent some change just because AFR returned different timestamp values across different calls.

The problem case is readdirp. Readdirp also fetches the attributes of the entries read. A directory on which readdirp is performed can have read child x while some of the entries read could have read child y. This means that readdirp may violate (b) above by returning timestamps from a copy of an entry that is not its read child. The other problem with this behavior is that even if the directory's read child holds a bad copy of some of the entries, this will not be detected.

My patches fixed these issues with readdirp by forcing a lookup on those entries whose read child did not match that of the parent. This would have led to some extra lookups. Both the BZs above were manifestations of the same bug in readdirp just described.

[UPDATE] It was decided in the meeting that the behavior introduced by the AFR patch is to be made optional, and that by default the behavior would be "off".
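To make the mechanism described above concrete, here is a minimal sketch of what the readdirp fix-up amounts to. This is not the actual AFR source; the structure, field, and function names are simplified stand-ins. Entries whose read child differs from the parent directory's read child get their inode dropped, which forces the client to issue a fresh named lookup (the source of the extra lookups, and hence the ls -l slowdown, reported above). The boolean flag reflects the decision to make the check optional:

/* Rough sketch only, NOT the actual AFR code: walk the entries returned
 * by readdirp and, when the check is enabled, discard the inode of any
 * entry whose designated read child differs from the parent directory's
 * read child, so that its attributes are re-fetched via a lookup. */

#include <stddef.h>

struct entry_sketch {
        struct entry_sketch *next;
        void                *inode;        /* NULL forces a fresh lookup */
        int                  read_subvol;  /* this entry's read child */
};

static void
fixup_readdirp_entries(struct entry_sketch *entries, int parent_read_subvol,
                       int check_enabled)
{
        for (struct entry_sketch *e = entries; e != NULL; e = e->next) {
                if (!check_enabled)
                        continue;  /* behavior made optional, default "off" */
                if (e->read_subvol != parent_read_subvol) {
                        /* Attributes were served from a subvol that is not
                         * this entry's read child: drop the inode so the
                         * client does a lookup, at the cost of an extra
                         * network round trip per mismatching entry. */
                        e->inode = NULL;
                }
        }
}

The trade-off is consistent attributes (and detection of bad copies) versus the per-entry lookups that caused the performance drop in the original report.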
REVIEW: http://review.gluster.org/9929 (cluster/afr: Make read child match check in afr optional) posted (#1) for review on release-3.6 by Krutika Dhananjay (kdhananj)
REVIEW: http://review.gluster.org/9929 (cluster/afr: Make read child match check in afr optional) posted (#2) for review on release-3.6 by Krutika Dhananjay (kdhananj)
COMMIT: http://review.gluster.org/9929 committed in release-3.6 by Raghavendra Bhat (raghavendra)
------
commit 6477c13c63e181dec4f034d8d25435026550d93a
Author: Krutika Dhananjay <kdhananj>
Date:   Tue Mar 17 13:16:45 2015 +0530

    cluster/afr: Make read child match check in afr optional

    Backport of: http://review.gluster.org/9917

    This particular check, which was introduced by commit
    c57c455347a72ebf0085add49ff59aae26c7a70d, causes a drop in readdirp
    performance. So the behavior is made configurable with this patch.

    Change-Id: I4a19813cfc786504340264a5a5533a0c43a1d4a4
    BUG: 1202673
    Signed-off-by: Krutika Dhananjay <kdhananj>
    Reviewed-on: http://review.gluster.org/9929
    Reviewed-by: Atin Mukherjee <amukherj>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Reviewed-by: Raghavendra Bhat <raghavendra>
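The commit above turns the earlier unconditional check into a tunable that defaults to "off". As a rough illustration of the general pattern (the option name, struct, and function below are hypothetical and simplified; the real definitions are introduced by the patch referenced in the commit message), a translator keeps a boolean in its private state and updates it when the corresponding volume option is set or changed:

/* Illustrative sketch only: "ensure-readdirp-consistency" and the names
 * below are hypothetical, not the actual option or code added by the
 * patch linked above. */

#include <stdbool.h>
#include <string.h>

struct afr_private_sketch {
        bool ensure_readdirp_consistency;  /* default "off" per the commit */
};

/* Invoked when the (hypothetical) option is set or changed, e.g. through
 * "gluster volume set <volname> <option> on" on the CLI. */
static void
reconfigure_sketch(struct afr_private_sketch *priv, const char *value)
{
        priv->ensure_readdirp_consistency =
                (value != NULL) && (strcmp(value, "on") == 0);
}

With the option off, readdirp serves entry attributes as before; turning it on restores the stricter read-child match check at the cost of the extra lookups.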
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v3.6.3, please open a new bug report.

glusterfs-v3.6.3 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2015-April/021669.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user