Bug 1469436 - set the shard-block-size to recommended value
Status: CLOSED CURRENTRELEASE
Product: cockpit-ovirt
Classification: oVirt
Component: Gdeploy
Version: 0.10.7-0.0.20
Hardware: x86_64 Linux
Priority: high
Severity: high
Target Milestone: ovirt-4.1.5
Target Release: 0.10.7-0.0.23
Assigned To: Gobinda Das
QA Contact: RamaKasturi
Depends On:
Blocks: Gluster-HC-3 RHHI-1.1-RFEs 1480607
Reported: 2017-07-11 05:27 EDT by SATHEESARAN
Modified: 2017-08-28 06:33 EDT (History)
4 users

See Also:
Fixed In Version: cockpit-ovirt-0.10.7-0.0.23
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-08-23 04:01:50 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Gluster
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
sasundar: ovirt-4.1?
sasundar: planning_ack?
rule-engine: devel_ack+
rule-engine: testing_ack+


Attachments


External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 80459 master MERGED Set shard-block-size to 64MB default 2017-08-09 14:10 EDT
oVirt gerrit 80460 ovirt-4.1 MERGED Set shard-block-size to 64MB default 2017-08-09 14:10 EDT

Description SATHEESARAN 2017-07-11 05:27:44 EDT
Description of problem:
-----------------------
The shard-block-size option on the engine, vmstore, and data volumes was left at its default value.

This option needs to be set explicitly for all the volumes in the generated gdeploy config file.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
cockpit-ovirt-dashboard-0.10.7-0.0.20

How reproducible:
-----------------
NA

Steps to Reproduce:
--------------------
Not applicable

Actual results:
---------------
shard-block-size is not set to the recommended value in the generated gdeploy config file

Expected results:
-----------------
shard-block-size should be set to the recommended value in the generated gdeploy config file

Additional info:
----------------
Example of the change in generated gdeploy config file:

[volume1]
action=create
volname=engine
..
..
key=......,features.shard-block-size
value=.......,128MB

[volume2]
action=create
volname=vmstore
key=......,features.shard-block-size
value=.......,128MB

[volume3]
action=create
volname=data
key=......,features.shard-block-size
value=.......,128MB
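The sections above append features.shard-block-size to the volume's positional key/value lists. As a rough illustration only (the helper name and the trimmed option set are hypothetical, not the actual cockpit-ovirt generator code), emitting such a section might look like:

```python
# Hypothetical sketch: build a gdeploy [volumeN] section where the
# volume options are emitted as parallel, comma-separated key= and
# value= lists, with features.shard-block-size included.
def volume_section(index, volname, options):
    lines = [
        f"[volume{index}]",
        "action=create",
        f"volname={volname}",
        "key=" + ",".join(options.keys()),
        "value=" + ",".join(options.values()),
    ]
    return "\n".join(lines)

# Trimmed example options; the real generated file carries the full list.
opts = {"group": "virt", "features.shard-block-size": "128MB"}
print(volume_section(1, "engine", opts))
```

Because gdeploy pairs the key and value lists by position, the new option and its value must be appended at the same index in both lists.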
Comment 3 Sahina Bose 2017-08-09 07:41:38 EDT
Gobinda, change the default in Cockpit to use 64MB
Comment 4 RamaKasturi 2017-08-16 08:32:32 EDT
Verified and works fine with build cockpit-ovirt-dashboard-0.10.7-0.0.23.el7ev.noarch.

I see that shard size in the generated gdeploy conf file has been set to 64MB.

key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,features.shard-block-size,server.ssl,client.ssl,auth.ssl-allow
value=virt,36,36,30,on,off,enable,64MB,on,on, "H1;H2;H3"
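Since the key and value lists pair up positionally, a throwaway check like the following (not part of the product, just a verification sketch using the lines above) confirms the shard block size that was generated:

```python
# Pair the positional key/value lists from the generated gdeploy
# section and look up the shard block size.
key_line = ("group,storage.owner-uid,storage.owner-gid,network.ping-timeout,"
            "performance.strict-o-direct,network.remote-dio,"
            "cluster.granular-entry-heal,features.shard-block-size,"
            "server.ssl,client.ssl,auth.ssl-allow")
value_line = 'virt,36,36,30,on,off,enable,64MB,on,on,"H1;H2;H3"'

options = dict(zip(key_line.split(","), value_line.split(",")))
print(options["features.shard-block-size"])  # -> 64MB
```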

gluster volume info vmstore:
===================================
Volume Name: vmstore
Type: Replicate
Volume ID: 6473f7c8-0c3a-433f-b49e-2923d25fa62a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.36.79:/gluster_bricks/vmstore/vmstore
Brick2: 10.70.36.80:/gluster_bricks/vmstore/vmstore
Brick3: 10.70.36.81:/gluster_bricks/vmstore/vmstore (arbiter)
Options Reconfigured:
auth.ssl-allow: 10.70.36.79,10.70.36.80,10.70.36.81
client.ssl: on
server.ssl: on
features.shard-block-size: 64MB
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on

volume info engine:
===========================
[root@rhsqa-grafton1 ~]# gluster volume info engine
 
Volume Name: engine
Type: Replicate
Volume ID: 749295cd-1b6b-44cb-b53e-ae095fd9f641
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.36.79:/gluster_bricks/engine/engine
Brick2: 10.70.36.80:/gluster_bricks/engine/engine
Brick3: 10.70.36.81:/gluster_bricks/engine/engine
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
features.shard-block-size: 64MB
client.ssl: on
server.ssl: on
auth.ssl-allow: 10.70.36.80,10.70.36.79,10.70.36.81

volume info data:
==================================================
[root@rhsqa-grafton1 ~]# gluster volume info data
 
Volume Name: data
Type: Replicate
Volume ID: 85b8a9d2-8ba7-4f51-994a-64f50f079335
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.36.79:/gluster_bricks/data/data
Brick2: 10.70.36.80:/gluster_bricks/data/data
Brick3: 10.70.36.81:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
auth.ssl-allow: 10.70.36.79,10.70.36.80,10.70.36.81
client.ssl: on
server.ssl: on
features.shard-block-size: 64MB
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
