Bug 1449226 - [gluster-block]: Need a volume group profile option for gluster-block volumes to add the necessary options.
Summary: [gluster-block]: Need a volume group profile option for gluster-block volumes to add the necessary options.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gluster-block
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.3.0
Assignee: Pranith Kumar K
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On:
Blocks: 1417151 1450010 1456224
 
Reported: 2017-05-09 12:47 UTC by surabhi
Modified: 2017-09-21 04:41 UTC (History)
5 users

Fixed In Version: glusterfs-3.8.4-26
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1450010
Environment:
Last Closed: 2017-09-21 04:41:45 UTC
Embargoed:


Attachments: None


Links
System ID: Red Hat Product Errata RHBA-2017:2774
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: glusterfs bug fix and enhancement update
Last Updated: 2017-09-21 08:16:29 UTC

Description surabhi 2017-05-09 12:47:48 UTC
Description of problem:
**************************

The performance translator (perf-xlator) options may need to be turned off for gluster-block volumes, since they can cause data inconsistency when the volume is accessed from initiators.
So we need a volume group profile option that applies all the required volume options, for example the perf xlator settings, in one step.

server.allow-insecure: on can also be added to the profile, as sketched below.
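
For illustration only: as far as I understand, glusterd group profiles are plain option=value files shipped under /var/lib/glusterd/groups/, and 'gluster volume set <VOLNAME> group <profile>' applies every option listed in the file in one step. A minimal sketch of what a gluster-block profile could contain (the exact path and option list are assumptions, not the final set):

# /var/lib/glusterd/groups/gluster-block (assumed path and contents)
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.stat-prefetch=off
performance.open-behind=off
performance.readdir-ahead=off
network.remote-dio=enable
server.allow-insecure=on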



Version-Release number of selected component (if applicable):
*********************************
gluster-block-0.2-1.x86_64


How reproducible:
*********************
Always


Steps to Reproduce:
1. Set each of the required volume options manually with an individual 'gluster volume set' command.

Actual results:
*******************
All the required options currently have to be added manually, one command at a time (illustrated below).
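
To make the current workflow concrete, each option needs its own set command today, along these lines (the volume name is a placeholder):

gluster volume set <VOLNAME> performance.quick-read off
gluster volume set <VOLNAME> performance.read-ahead off
gluster volume set <VOLNAME> performance.io-cache off
gluster volume set <VOLNAME> performance.stat-prefetch off
gluster volume set <VOLNAME> server.allow-insecure on
... and so on for every remaining option.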


Expected results:
***************************
It would be better to have a volume group profile that applies all the required options in a single step, as shown below.
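
That is, a single command should apply the whole profile, presumably along these lines (the group name 'gluster-block' matches the profile requested above):

gluster volume set <VOLNAME> group gluster-block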

Additional info:

Comment 3 Atin Mukherjee 2017-05-12 16:44:33 UTC
Please ensure the group profile option gets documented in the gluster-block section. You can raise a doc bug if you want to track it separately.

Comment 8 Sweta Anandpara 2017-07-06 10:26:41 UTC
Tested this on the build glusterfs-3.8.4-31. 

I had a 1x3 volume 'nash' on which I executed the command 'gluster volume set nash group gluster-block', which should have enabled 17 options (as per the patch https://review.gluster.org/#/c/17254/3/extras/group-gluster-block). I see all the options correctly enabled except one: 'performance.write-behind'.

'performance.write-behind' should ideally have been switched to 'off', but it remains 'on'.

[root@dhcp47-121 ~]# gluster v create testvol replica 3 10.70.47.121:/bricks/brick2/testvol0 10.70.47.113:/bricks/brick2/testvol1 10.70.47.114:/bricks/brick2/testvol2
volume create: testvol: success: please start the volume to access data
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# gluster v info testvol
 
Volume Name: testvol
Type: Replicate
Volume ID: 35a0b1a7-0dc3-4536-96aa-bd181b91c381
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.47.121:/bricks/brick2/testvol0
Brick2: 10.70.47.113:/bricks/brick2/testvol1
Brick3: 10.70.47.114:/bricks/brick2/testvol2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
cluster.brick-multiplex: disable
cluster.enable-shared-storage: enable
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# gluster v set testvol group gluster-block
volume set: success
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# gluster v info testvol
 
Volume Name: testvol
Type: Replicate
Volume ID: 35a0b1a7-0dc3-4536-96aa-bd181b91c381
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.47.121:/bricks/brick2/testvol0
Brick2: 10.70.47.113:/bricks/brick2/testvol1
Brick3: 10.70.47.114:/bricks/brick2/testvol2
Options Reconfigured:
server.allow-insecure: on
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.readdir-ahead: off
performance.open-behind: off
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
cluster.brick-multiplex: disable
cluster.enable-shared-storage: enable
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]# gluster v get testvol all | grep performance.write-behind
performance.write-behind-window-size    1MB                                     
performance.write-behind                on                                      
[root@dhcp47-121 ~]# 
[root@dhcp47-121 ~]#
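
For reference, one way to cross-check the values applied on the volume against the shipped profile is to read the group file directly. This is only a sketch and assumes the profile is installed at /var/lib/glusterd/groups/gluster-block in the usual option=value format:

# compare each option in the group file with the live value on the volume
while IFS='=' read -r opt val; do
    printf '%s (expected %s): ' "$opt" "$val"
    gluster volume get testvol "$opt" | tail -n 1
done < /var/lib/glusterd/groups/gluster-block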

Comment 9 Sweta Anandpara 2017-07-06 10:29:09 UTC
Tried this twice, on an already existing volume (with blocks created internally) and on a newly created volume. Both times, it resulted in setting only 16 of the mentioned 17 options.

Pranithk, thoughts? Am I missing something?

Comment 10 Pranith Kumar K 2017-07-06 10:36:53 UTC
(In reply to Sweta Anandpara from comment #9)
> Tried this twice, on an already existing volume (with blocks created
> internally) and on a newly created volume. Both the times, it resulted in
> setting only 16 options out of the mentioned 17. 
> 
> Pranithk, thoughts? Am I missing something?

We had to remove performance.write-behind=off because of the bz: https://bugzilla.redhat.com/show_bug.cgi?id=1454313

Patch upstream:
https://review.gluster.org/#/c/17387/
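
For anyone re-verifying after that change, a quick way to confirm that write-behind is no longer part of the group (assuming the profile still ships at /var/lib/glusterd/groups/gluster-block) would be:

grep write-behind /var/lib/glusterd/groups/gluster-block || echo "write-behind not set by the group"

together with 'gluster volume get <VOLNAME> performance.write-behind' to see the live value, as already done in comment 8.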

Comment 11 Pranith Kumar K 2017-07-06 10:37:47 UTC
Missed updating this bz :-(. Sorry for the confusion.

Comment 12 Sweta Anandpara 2017-07-06 11:11:14 UTC
Thanks Pranithk. Moving this BZ to verified after confirming the patch mentioned in comment 10.

Comment 14 errata-xmlrpc 2017-09-21 04:41:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774

