Bug 1469436 - set the shard-block-size to recommended value
Summary: set the shard-block-size to recommended value
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: cockpit-ovirt
Classification: oVirt
Component: Gdeploy
Version: 0.10.7-0.0.20
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ovirt-4.1.5
Target Release: 0.10.7-0.0.23
Assignee: Gobinda Das
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On:
Blocks: Gluster-HC-3 1480607 RHHI-1.1-RFEs
 
Reported: 2017-07-11 09:27 UTC by SATHEESARAN
Modified: 2017-08-28 10:33 UTC
CC List: 4 users

Fixed In Version: cockpit-ovirt-0.10.7-0.0.23
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-23 08:01:50 UTC
oVirt Team: Gluster
Embargoed:
sasundar: ovirt-4.1?
sasundar: planning_ack?
rule-engine: devel_ack+
rule-engine: testing_ack+




Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 80459 0 master MERGED Set shard-block-size to 64MB default 2020-11-11 08:09:55 UTC
oVirt gerrit 80460 0 ovirt-4.1 MERGED Set shard-block-size to 64MB default 2020-11-11 08:09:56 UTC

Description SATHEESARAN 2017-07-11 09:27:44 UTC
Description of problem:
-----------------------
The shard-block-size option on the engine, vmstore, and data volumes was left at the default.

This option needs to be set for all the volumes in the generated gdeploy config file.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
cockpit-ovirt-dashboard-0.10.7-0.0.20

How reproducible:
-----------------
NA

Steps to Reproduce:
--------------------
Not applicable

Actual results:
---------------
shard-block-size is not set to the recommended value in the generated gdeploy config file

Expected results:
-----------------
shard-block-size should be set to the recommended value in the generated gdeploy config file

Additional info:
----------------
Example of the change in the generated gdeploy config file:

[volume1]
action=create
volname=engine
..
..
key=......,features.shard-block-size
value=.......,128MB

[volume2]
action=create
volname=vmstore
key=......,features.shard-block-size
value=.......,128MB

[volume3]
action=create
volname=data
key=......,features.shard-block-size
value=.......,128MB

Comment 3 Sahina Bose 2017-08-09 11:41:38 UTC
Gobinda, change the default in Cockpit to use 64MB
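
For reference, a minimal sketch of what the relevant part of the generated [volume] section would look like with a 64MB default; only the group and shard-block-size keys are shown here, the full key/value pair as verified is in comment 4 below:

[volume1]
action=create
volname=engine
key=group,features.shard-block-size
value=virt,64MB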

Comment 4 RamaKasturi 2017-08-16 12:32:32 UTC
Verified and works fine with build cockpit-ovirt-dashboard-0.10.7-0.0.23.el7ev.noarch.

I see that the shard block size in the generated gdeploy conf file has been set to 64MB.

key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,features.shard-block-size,server.ssl,client.ssl,auth.ssl-allow
value=virt,36,36,30,on,off,enable,64MB,on,on, "H1;H2;H3"

gluster volume info vmstore:
===================================
Volume Name: vmstore
Type: Replicate
Volume ID: 6473f7c8-0c3a-433f-b49e-2923d25fa62a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.36.79:/gluster_bricks/vmstore/vmstore
Brick2: 10.70.36.80:/gluster_bricks/vmstore/vmstore
Brick3: 10.70.36.81:/gluster_bricks/vmstore/vmstore (arbiter)
Options Reconfigured:
auth.ssl-allow: 10.70.36.79,10.70.36.80,10.70.36.81
client.ssl: on
server.ssl: on
features.shard-block-size: 64MB
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on

volume info engine:
===========================
[root@rhsqa-grafton1 ~]# gluster volume info engine
 
Volume Name: engine
Type: Replicate
Volume ID: 749295cd-1b6b-44cb-b53e-ae095fd9f641
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.36.79:/gluster_bricks/engine/engine
Brick2: 10.70.36.80:/gluster_bricks/engine/engine
Brick3: 10.70.36.81:/gluster_bricks/engine/engine
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
features.shard-block-size: 64MB
client.ssl: on
server.ssl: on
auth.ssl-allow: 10.70.36.80,10.70.36.79,10.70.36.81

volume info data:
==================================================
[root@rhsqa-grafton1 ~]# gluster volume info data
 
Volume Name: data
Type: Replicate
Volume ID: 85b8a9d2-8ba7-4f51-994a-64f50f079335
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.36.79:/gluster_bricks/data/data
Brick2: 10.70.36.80:/gluster_bricks/data/data
Brick3: 10.70.36.81:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
auth.ssl-allow: 10.70.36.79,10.70.36.80,10.70.36.81
client.ssl: on
server.ssl: on
features.shard-block-size: 64MB
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
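
The per-option values above can also be double-checked directly with the Gluster CLI; a quick example, assuming the same volume names, run on any node of the cluster:

# print the effective shard block size for each volume
gluster volume get engine features.shard-block-size
gluster volume get vmstore features.shard-block-size
gluster volume get data features.shard-block-size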

