Bug 993981 - Fuse: Performance is drastically degraded
Summary: Fuse: Performance is drastically degraded
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: locks
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact:
URL:
Whiteboard:
Depends On: 993583
Blocks:
 
Reported: 2013-08-06 13:33 UTC by Pranith Kumar K
Modified: 2014-04-17 11:45 UTC
CC List: 5 users

Fixed In Version: glusterfs-3.5.0
Doc Type: Bug Fix
Doc Text:
Clone Of: 993583
Environment:
Last Closed: 2014-04-17 11:45:20 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Anand Avati 2013-08-06 13:37:09 UTC
REVIEW: http://review.gluster.org/5503 (features/locks: Convert old style metadata locks to new-style) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 2 Anand Avati 2013-08-07 10:34:00 UTC
COMMIT: http://review.gluster.org/5503 committed in master by Anand Avati (avati) 
------
commit c6a555d1268c667b72728ffa58600fc0632465e4
Author: Pranith Kumar K <pkarampu>
Date:   Tue Aug 6 17:40:05 2013 +0530

    features/locks: Convert old style metadata locks to new-style
    
    Problem:
    In 3.3, inode locks for both metadata and data operations compete
    in the same domain, called the data domain (old style). Coupled
    with eager-lock and delayed post-ops, this introduces delays for
    metadata operations like chmod, chown, etc. To avoid this, inode
    locks for metadata ops were moved to a separate domain, called the
    metadata domain, in 3.4 (new style). But when both 3.3 and 3.4
    clients are present, 3.4 clients must still take their metadata
    locks in the old style so that proper synchronization happens
    across 3.3 and 3.4 clients; only when all clients are >= 3.4 are
    metadata locks taken in the new style. Because of this behavior,
    as long as at least one 3.3 client is present, metadata operations
    on all 3.4 clients are delayed while data operations are in
    progress (e.g. untar proceeds at one file per second).
    
    Fix:
    Make the locks xlator translate old-style metadata locks into
    new-style metadata locks. Since the upgrade process recommends
    upgrading servers first and then clients, this approach gives
    good results. (A minimal sketch of this translation follows this
    comment.)
    
    Tests:
    1) Tested, using gdb, that old-style metadata locks are converted
       to new-style by the locks xlator.
    2) Tested, using gdb and statedumps, that disconnects purge locks
       in the meta-data domain as well.
    3) Tested that untar performance is not hampered by concurrent
       meta-data and data operations.
    4) Had two mounts, one with orthogonal-meta-data on and the other
       with orthogonal-meta-data off. Ran chmod 777 <file> on one
       mount and chmod 555 <file> on the other in while loops; the
       statedumps I took showed both transports taking a lock on the
       same domain with the same range.
    
       18:49:30 :) ⚡ sudo grep -B1 "ACTIVE" /usr/local/var/run/gluster/home-gfs-r2_0.324.dump.*
       home-gfs-r2_0.324.dump.1375794971-lock-dump.domain.domain=r2-replicate-0:metadata
       home-gfs-r2_0.324.dump.1375794971:inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=9223372036854775806, len=0, pid = 7525, owner=78f9e652497f0000, transport=0x15ac9e0, , granted at Tue Aug  6 18:46:11 2013
    
       home-gfs-r2_0.324.dump.1375795051-lock-dump.domain.domain=r2-replicate-0:metadata
       home-gfs-r2_0.324.dump.1375795051:inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=9223372036854775806, len=0, pid = 8879, owner=0019cc3cad7f0000, transport=0x158f580, , granted at Tue Aug  6 18:47:31 2013
    
    Change-Id: I268df4efd93a377a0c73fbc59b739ef12a7a8bb6
    BUG: 993981
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/5503
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati>
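
A minimal sketch of the translation idea described in the commit above, for illustration only (this is not the actual locks-xlator code). It assumes an old-style metadata lock can be recognized as a zero-length (till-EOF) inodelk in the data domain starting at LLONG_MAX - 1, the offset 9223372036854775806 visible in the statedump above; the helper names (effective_domain, is_data_domain) and the exact domain strings are hypothetical.

#define _GNU_SOURCE   /* for asprintf() */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Offset at which the metadata inodelk is taken, as seen in the
 * statedump above (9223372036854775806 == LLONG_MAX - 1). */
#define METADATA_LOCK_START (LLONG_MAX - 1)

/* Hypothetical: the data domain is simply the subvolume name. */
static int
is_data_domain(const char *domain, const char *subvol)
{
        return strcmp(domain, subvol) == 0;
}

/* Map an incoming lock's domain to the domain it should actually be
 * held in. Old (3.3) clients send metadata locks in the data domain;
 * the server rewrites them into "<subvol>:metadata" so they serialize
 * against the new-style locks taken by >= 3.4 clients. */
static char *
effective_domain(const char *domain, const char *subvol,
                 long long start, long long len)
{
        char *out = NULL;

        if (is_data_domain(domain, subvol) &&
            start == METADATA_LOCK_START && len == 0) {
                /* Old-style metadata lock: translate to new style. */
                if (asprintf(&out, "%s:metadata", subvol) < 0)
                        return NULL;
        } else {
                /* Data lock, or already new-style: leave untouched. */
                out = strdup(domain);
        }
        return out;
}

int
main(void)
{
        char *d = effective_domain("r2-replicate-0", "r2-replicate-0",
                                   METADATA_LOCK_START, 0);
        if (d) {
                printf("held in domain: %s\n", d); /* ...:metadata */
                free(d);
        }
        return 0;
}

The key design point is that the translation happens server-side, which is why the upgrade order (servers first) suffices: old clients keep sending old-style requests, but on the bricks those land in the same domain as the new-style locks.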

Comment 3 Anand Avati 2013-10-03 05:11:20 UTC
REVIEW: http://review.gluster.org/6025 (cluster/afr: Change Self-heal domain separator to ':') posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 4 Anand Avati 2013-10-03 06:24:47 UTC
REVIEW: http://review.gluster.org/6025 (cluster/afr: Change Self-heal domain separator to ':') posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 5 Anand Avati 2013-10-03 06:24:55 UTC
REVIEW: http://review.gluster.org/6028 (Tests: Enable fore-ground self-heal) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 6 Anand Avati 2013-10-04 04:28:20 UTC
COMMIT: http://review.gluster.org/6025 committed in master by Anand Avati (avati) 
------
commit c32db94e29f1c20d7eede05c7c6ad7657771aaa4
Author: Pranith Kumar K <pkarampu>
Date:   Thu Oct 3 10:28:47 2013 +0530

    cluster/afr: Change Self-heal domain separator to ':'
    
    '-' can be present in a volume name. This may lead to domain
    collisions in the future.
    
    Tests:
    Checked in gdb that the domain now uses the ':' separator:
    
    Breakpoint 1, pl_common_inodelk (frame=0x7fdabcce51a4,
    this=0x8bde20, volume=0x8b50d0 "r2-replicate-0:self-heal",
    inode=0x7fdab822f0e8, cmd=6, flock=0x7fdabc76eee4,
    loc=0x7fdabc76ede4, fd=0x0, xdata=0x7fdabc6e0ab0) at inodelk.c:597
    
    Change-Id: I4456ae35ac8bf21e6361c34e9ad437f744a2e84b
    BUG: 993981
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/6025
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati>
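
The rationale in the commit above can be made concrete with a small sketch (illustrative only; build_domain is a hypothetical helper, and the sketch assumes ':' is not a legal character in volume names while '-' is). With '-' as separator, the self-heal domain of subvolume r2-replicate-0 is the same string as the bare data domain of a volume that happened to be named r2-replicate-0-self-heal, so locks in the two domains would collide.

#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: build "<subvol><sep>self-heal". */
static void
build_domain(char *buf, size_t n, const char *subvol, char sep)
{
        snprintf(buf, n, "%s%cself-heal", subvol, sep);
}

int
main(void)
{
        char old_style[128], new_style[128];

        /* With '-' the self-heal domain of "r2-replicate-0" equals
         * the bare (data) domain of a volume that happens to be
         * named "r2-replicate-0-self-heal": a collision. */
        build_domain(old_style, sizeof(old_style), "r2-replicate-0", '-');
        assert(strcmp(old_style, "r2-replicate-0-self-heal") == 0);

        /* With ':' no bare volume name can ever equal the domain
         * string, assuming ':' cannot appear in a volume name. */
        build_domain(new_style, sizeof(new_style), "r2-replicate-0", ':');
        assert(strcmp(new_style, "r2-replicate-0:self-heal") == 0);

        printf("old: %s\nnew: %s\n", old_style, new_style);
        return 0;
}

This matches the gdb observation quoted in the commit, where the domain argument appears as "r2-replicate-0:self-heal".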

Comment 7 Anand Avati 2013-10-04 04:28:34 UTC
COMMIT: http://review.gluster.org/6028 committed in master by Anand Avati (avati) 
------
commit a25bd2d7695760c9fe35fec39065c9326f2952d6
Author: Pranith Kumar K <pkarampu>
Date:   Thu Oct 3 11:52:53 2013 +0530

    Tests: Enable fore-ground self-heal
    
    Change-Id: Ibfca8ddb7c663d44ed447be13b2eabb7bd393bb3
    BUG: 993981
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/6028
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati>

Comment 8 Niels de Vos 2014-04-17 11:45:20 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

