Bug 1475605 - gluster-block default shard-size should be 64MB
Summary: gluster-block default shard-size should be 64MB
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1476654
 
Reported: 2017-07-27 00:38 UTC by Pranith Kumar K
Modified: 2017-12-08 17:35 UTC
CC: 1 user

Fixed In Version: glusterfs-3.13.0
Clone Of:
Clones: 1476654
Environment:
Last Closed: 2017-12-08 17:35:59 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Pranith Kumar K 2017-07-27 00:38:33 UTC
Description of problem:
    With a 4MB shard size, I/O slows down because of the extra
    inodelk/xattrop operations replicate performs per shard: a 1GB image
    spans 256 shards at 4MB but only 16 at 64MB. Raising the default to
    64MB gave better performance than 4MB.

    To simulate writes to a preallocated VM image, fallocate the file and
    then overwrite it with dd using conv=notrunc:

        fallocate -l 1GB file-1GB
        dd if=/dev/zero of=file-1GB bs=1MB count=1024 conv=notrunc
    
    These are the results on my laptop for dd (columns: %-latency, avg
    latency, min latency, max latency, number of calls, FOP):
    With 4MB:
          1.84    1357.37 us      19.00 us   12431.00 us           1188    FINODELK
          2.45     255.08 us      58.00 us    4038.00 us           8428       WRITE
         95.69   78967.76 us      30.00 us 20324240.00 us           1063    FXATTROP
    
    With 64MB:
          0.13      59.36 us      15.00 us     814.00 us            657    FINODELK
          6.02     225.53 us      69.00 us    6556.00 us           8205       WRITE
         93.82  103015.12 us      32.00 us 13046368.00 us            280    FXATTROP
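
    Assuming the two runs were produced by switching the shard size on the
    volume, a hedged sketch of the relevant settings ("blockvol" is again a
    placeholder; note the new size applies to files created after the
    option is set):

        gluster volume set blockvol features.shard on
        gluster volume set blockvol features.shard-block-size 4MB    # baseline run
        gluster volume set blockvol features.shard-block-size 64MB   # proposed default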



Comment 1 Worker Ant 2017-07-27 00:39:34 UTC
REVIEW: https://review.gluster.org/17887 (group-gluster-block: Set default shard-block-size to 4MB) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 2 Worker Ant 2017-07-27 00:40:53 UTC
REVIEW: https://review.gluster.org/17887 (group-gluster-block: Set default shard-block-size to 64MB) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 3 Worker Ant 2017-07-27 10:39:57 UTC
COMMIT: https://review.gluster.org/17887 committed in master by Krutika Dhananjay (kdhananj) 
------
commit abfbc3eb821e144ddbfdc5d7da401557b52beaf1
Author: Pranith Kumar K <pkarampu>
Date:   Wed Jul 26 20:27:08 2017 +0530

    group-gluster-block: Set default shard-block-size to 64MB
    
    With a 4MB shard size, I/O slows down because of the extra
    inodelk/xattrop operations replicate performs per shard. Raising
    the default to 64MB gave better performance than 4MB.

    To simulate writes to a preallocated VM image, fallocate the file and
    then overwrite it with dd using conv=notrunc:

        fallocate -l 1GB file-1GB
        dd if=/dev/zero of=file-1GB bs=1MB count=1024 conv=notrunc
    
    These are the results on my laptop for dd:
    With 4MB:
          1.84    1357.37 us      19.00 us   12431.00 us           1188    FINODELK
          2.45     255.08 us      58.00 us    4038.00 us           8428       WRITE
         95.69   78967.76 us      30.00 us 20324240.00 us           1063    FXATTROP
    
    With 64MB:
          0.13      59.36 us      15.00 us     814.00 us            657    FINODELK
          6.02     225.53 us      69.00 us    6556.00 us           8205       WRITE
         93.82  103015.12 us      32.00 us 13046368.00 us            280    FXATTROP
    
    BUG: 1475605
    Change-Id: I4ed5441409df639e38c731ba0d140fe92902f25f
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: https://review.gluster.org/17887
    CentOS-regression: Gluster Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Krutika Dhananjay <kdhananj>
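
For context, the patch edits the gluster-block group profile (extras/group-gluster-block in the glusterfs source tree), which is applied with "gluster volume set <VOLNAME> group gluster-block". A hedged sketch of the relevant lines in that file after this change (the profile's other options are omitted):

    features.shard=on
    features.shard-block-size=64MB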

Comment 4 Shyamsundar 2017-12-08 17:35:59 UTC
This bug is being closed because a release that should address the reported issue is now available. If the problem persists with glusterfs-3.13.0, please open a new bug report.

glusterfs-3.13.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-December/000087.html
[2] https://www.gluster.org/pipermail/gluster-users/

