Bug 1317785 - Cache swift xattrs
Summary: Cache swift xattrs
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: md-cache
Version: mainline
Hardware: All
OS: All
Priority: urgent
Severity: low
Target Milestone: ---
Assignee: Prashanth Pai
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1317788 1317790
 
Reported: 2016-03-15 08:32 UTC by Prashanth Pai
Modified: 2016-06-16 14:00 UTC
CC List: 2 users

Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1317788
Environment:
Last Closed: 2016-06-16 14:00:28 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
tcpdump trace of DELETE object operation from gluster-swift (7.73 KB, application/octet-stream), 2016-03-15 08:34 UTC, Prashanth Pai
tcpdump after application of patch (6.80 KB, application/octet-stream), 2016-03-15 09:03 UTC, Prashanth Pai

Description Prashanth Pai 2016-03-15 08:32:31 UTC
Description of problem:

gluster-swift relies extensively on the "user.swift.metadata" xattr for its functionality. This xattr is not cached, so performance is usually terrible.

Every operation (GET, PUT, POST, DELETE, HEAD) in gluster-swift issues getxattr() as follows:

getxattr("/mnt/gluster-object/test/c1/o8", "user.swift.metadata", 0x0, 0) = 190
getxattr("/mnt/gluster-object/test/c1/o8", "user.swift.metadata", "{"Content-Length":"11","ETag":"5eb63bbbe01eeed093cb22bb8f5acdc3","X-Timestamp":"1458021187.45380","X-Object-Type":"file","X-Type":"Object","Content-Type":"application/x-www-form-urlencoded"}", 190) = 190

The first getxattr() call fetches the size of the xattr value; the second fetches the value itself.
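
For illustration, a minimal C sketch of that two-call idiom (size probe, then fetch) which produces exactly the pair of calls seen in the trace above; the path and xattr name are taken from the trace, everything else is illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/xattr.h>

/* Sketch of the standard two-call getxattr() idiom seen in the trace above. */
int main(void)
{
    const char *path = "/mnt/gluster-object/test/c1/o8";
    const char *name = "user.swift.metadata";

    /* First call: a size of 0 asks only for the length of the value. */
    ssize_t len = getxattr(path, name, NULL, 0);
    if (len < 0) {
        perror("getxattr (size probe)");
        return 1;
    }

    /* Second call: fetch the value into a buffer of that length. */
    char *buf = malloc((size_t)len + 1);
    if (buf == NULL)
        return 1;
    ssize_t got = getxattr(path, name, buf, (size_t)len);
    if (got < 0) {
        perror("getxattr (fetch)");
        free(buf);
        return 1;
    }
    buf[got] = '\0';
    printf("%s = %s\n", name, buf);
    free(buf);
    return 0;
}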

The entire xattr information is sent three times by the brick to the FUSE mount: once on lookup, again on the first getxattr() call to get the size, and again on the second getxattr() call. These three network calls can be reduced to just one if the xattr information is cached when it is fetched for the first time. A conceptual sketch of that caching idea follows.
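
To make the idea concrete, here is a conceptual C sketch (not the actual md-cache code) of caching the xattr value after the first fetch so that repeated lookups for the same file are served from memory; the structure and function names are made up for illustration, and cache invalidation is ignored:

#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/xattr.h>

/* Conceptual sketch only, NOT the md-cache implementation: a single-entry
 * cache keyed by path, assuming one xattr name ("user.swift.metadata")
 * is being cached. Invalidation on setxattr/removexattr is out of scope. */
struct xattr_cache {
    char    path[PATH_MAX];
    char    value[4096];
    ssize_t len;                 /* < 0 means "nothing cached yet" */
};

static ssize_t
cached_getxattr(struct xattr_cache *c, const char *path, const char *name,
                char *buf, size_t size)
{
    if (c->len >= 0 && strcmp(c->path, path) == 0) {
        /* Cache hit: serve from memory, no extra network round-trip. */
        if ((size_t)c->len > size)
            return -1;
        memcpy(buf, c->value, (size_t)c->len);
        return c->len;
    }

    /* Cache miss: fetch once from the filesystem and remember the result. */
    ssize_t len = getxattr(path, name, c->value, sizeof(c->value));
    if (len < 0)
        return len;
    snprintf(c->path, sizeof(c->path), "%s", path);
    c->len = len;

    if ((size_t)len > size)
        return -1;
    memcpy(buf, c->value, (size_t)len);
    return len;
}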

Comment 1 Prashanth Pai 2016-03-15 08:34:43 UTC
Created attachment 1136420 [details]
tcpdump trace of DELETE object operation from gluster-swift

Attached tcpdump trace of DELETE object operation from gluster-swift

Comment 2 Vijay Bellur 2016-03-15 09:01:57 UTC
REVIEW: http://review.gluster.org/13735 (md-cache: Cache gluster-swift metadata) posted (#1) for review on master by Prashanth Pai (ppai)

Comment 3 Prashanth Pai 2016-03-15 09:03:02 UTC
Created attachment 1136461 [details]
tcpdump after application of patch

Comment 4 Vijay Bellur 2016-03-15 09:04:28 UTC
REVIEW: http://review.gluster.org/13735 (md-cache: Cache gluster-swift metadata) posted (#2) for review on master by Prashanth Pai (ppai)

Comment 5 Vijay Bellur 2016-03-15 13:00:39 UTC
REVIEW: http://review.gluster.org/13735 (md-cache: Cache gluster-swift metadata) posted (#3) for review on master by Prashanth Pai (ppai)

Comment 6 Vijay Bellur 2016-03-16 07:00:54 UTC
REVIEW: http://review.gluster.org/13735 (md-cache: Cache gluster-swift metadata) posted (#4) for review on master by Prashanth Pai (ppai)

Comment 7 Vijay Bellur 2016-03-16 11:23:54 UTC
COMMIT: http://review.gluster.org/13735 committed in master by Jeff Darcy (jdarcy) 
------
commit 500ad8f3a72053d33120657e8a2e93d844041cf0
Author: Prashanth Pai <ppai>
Date:   Tue Mar 15 14:21:18 2016 +0530

    md-cache: Cache gluster-swift metadata
    
    BUG: 1317785
    Change-Id: Ie02b8fc294802f8fdf49dee8bf97f1e6177d92bd
    Signed-off-by: Prashanth Pai <ppai>
    Reviewed-on: http://review.gluster.org/13735
    Smoke: Gluster Build System <jenkins.com>
    Reviewed-by: Poornima G <pgurusid>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Niels de Vos <ndevos>
    Reviewed-by: Gaurav Kumar Garg <ggarg>

Comment 8 Niels de Vos 2016-06-16 14:00:28 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

