Bug 1450010 - [gluster-block]: Need a volume group profile option for gluster-block volumes to apply the necessary options
Summary: [gluster-block]: Need a volume group profile option for gluster-block volumes
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1449226
Blocks: 1456224
 
Reported: 2017-05-11 11:32 UTC by Pranith Kumar K
Modified: 2017-09-05 17:30 UTC (History)
4 users

Fixed In Version: glusterfs-3.12.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1449226
: 1456224 (view as bug list)
Environment:
Last Closed: 2017-09-05 17:30:07 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Pranith Kumar K 2017-05-11 11:35:34 UTC
Considering that we want all the block files to be written without any caches, we are disabling all the perf xlators. We are also enabling some options that proved to be good candidates in the virt profile. The final list at the moment is:
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.stat-prefetch=off
performance.write-behind=off
performance.open-behind=off
performance.readdir-ahead=off
network.remote-dio=enable
cluster.eager-lock=enable
cluster.quorum-type=auto
cluster.data-self-heal-algorithm=full
cluster.locking-scheme=granular
cluster.shd-max-threads=8
cluster.shd-wait-qlength=10000
features.shard=on
user.cifs=off
server.allow-insecure=on
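The options above are delivered as a glusterd "group" profile: a plain key=value file that glusterd applies in one shot when the group is set on a volume. A minimal sketch of that mechanism follows; the file path and volume name are illustrative (on an installed system the patch ships the file under glusterd's groups directory, typically /var/lib/glusterd/groups/gluster-block):

```shell
# Sketch: write the gluster-block group profile as a key=value file.
# glusterd reads such a file when "gluster volume set <vol> group <name>" runs.
cat > /tmp/group-gluster-block <<'EOF'
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.stat-prefetch=off
performance.write-behind=off
performance.open-behind=off
performance.readdir-ahead=off
network.remote-dio=enable
cluster.eager-lock=enable
cluster.quorum-type=auto
cluster.data-self-heal-algorithm=full
cluster.locking-scheme=granular
cluster.shd-max-threads=8
cluster.shd-wait-qlength=10000
features.shard=on
user.cifs=off
server.allow-insecure=on
EOF

# Sanity-check the profile: all seven perf xlators must be disabled.
grep -c '^performance\..*=off$' /tmp/group-gluster-block
```

On a live cluster the profile would then be applied with a single command, e.g. `gluster volume set <volname> group gluster-block` (volname is a placeholder), instead of setting all 17 options one by one.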

Comment 2 Worker Ant 2017-05-11 11:36:58 UTC
REVIEW: https://review.gluster.org/17254 (extras: Provide group set for block workloads) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 3 Worker Ant 2017-05-11 16:34:27 UTC
REVIEW: https://review.gluster.org/17254 (extras: Provide group set for gluster-block workloads) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 4 Worker Ant 2017-05-12 13:21:31 UTC
COMMIT: https://review.gluster.org/17254 committed in master by Jeff Darcy (jeff.us) 
------
commit ef61a79f33ca43a9548b9076bb152e6421416f78
Author: Pranith Kumar K <pkarampu>
Date:   Wed May 10 16:26:35 2017 +0530

    extras: Provide group set for gluster-block workloads
    
    For gluster-block workloads I/O is always with o-direct so it doesn't
    benefit by any of the perf xlators so disabling all of them to save
    on memory.
    performance.quick-read=off
    performance.read-ahead=off
    performance.io-cache=off
    performance.stat-prefetch=off
    performance.write-behind=off
    performance.open-behind=off
    performance.readdir-ahead=off
    
    We want the I/O on the file to be with o-direct
    network.remote-dio=enable
    
    Options that are proven to give good performance with
    VM workloads which is very similar to gluster-block
    cluster.eager-lock=enable
    cluster.quorum-type=auto
    cluster.data-self-heal-algorithm=full
    cluster.locking-scheme=granular
    cluster.shd-max-threads=8
    cluster.shd-wait-qlength=10000
    features.shard=on
    
    It is better to turn off things we are not using
    user.cifs=off
    
    It is better to have allow-insecure to be on so that
    ports that are > 1024 in tcmu-runner are allowed.
    server.allow-insecure=on
    
    Change-Id: I9a21c824fa42242f02b57569feedd03d9b6f9439
    BUG: 1450010
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: https://review.gluster.org/17254
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Niels de Vos <ndevos>
    CentOS-regression: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Jeff Darcy <jeff.us>

Comment 5 Shyamsundar 2017-09-05 17:30:07 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed in glusterfs-3.12.0, please open a new bug report.

glusterfs-3.12.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-September/000082.html
[2] https://www.gluster.org/pipermail/gluster-users/

