Description of problem:
**************************
The performance xlators may need to be turned off for gluster-block volumes, as they can cause data inconsistency when the blocks are accessed from initiators. A group profile option is needed so that all the required volume options (for example, disabling the perf xlators) can be applied in one step. server.allow-insecure: on can also be included in the profile.

Version-Release number of selected component (if applicable):
*********************************
gluster-block-0.2-1.x86_64

How reproducible:
*********************
Always

Steps to Reproduce:
1. Currently each of the required options has to be set manually.

Actual results:
*******************
All the options have to be added manually.

Expected results:
***************************
Provide a volume group profile that sets all the required options at once.

Additional info:
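For illustration only: a group profile is a plain key=value file that glusterd reads from /var/lib/glusterd/groups/. The sketch below shows what a gluster-block profile could look like; the option list here is only indicative (taken from the options discussed in this bug), and the authoritative contents are whatever ships in the extras/group-gluster-block file of the actual fix.

# cat /var/lib/glusterd/groups/gluster-block    (illustrative contents, not the shipped file)
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.stat-prefetch=off
performance.open-behind=off
performance.readdir-ahead=off
network.remote-dio=enable
cluster.eager-lock=enable
cluster.quorum-type=auto
cluster.data-self-heal-algorithm=full
cluster.locking-scheme=granular
cluster.shd-max-threads=8
cluster.shd-wait-qlength=10000
features.shard=on
user.cifs=off
server.allow-insecure=on

# With such a file in place, all options in the profile are applied with a single command
# (<volname> is a placeholder):
# gluster volume set <volname> group gluster-block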
Please ensure the group profile option gets documented in the gluster-block section. You can raise a doc bug if you want to track it separately.
Tested this on the build glusterfs-3.8.4-31. I had a 1*3 volume 'nash' on which I executed the command 'gluster volume set nash group gluster-block', which should have enabled 17 options (as per the patch https://review.gluster.org/#/c/17254/3/extras/group-gluster-block). I see all options correctly enabled except for one, 'performance.write-behind'. It should ideally have been switched to 'off', but it remains 'on'.

[root@dhcp47-121 ~]# gluster v create testvol replica 3 10.70.47.121:/bricks/brick2/testvol0 10.70.47.113:/bricks/brick2/testvol1 10.70.47.114:/bricks/brick2/testvol2
volume create: testvol: success: please start the volume to access data

[root@dhcp47-121 ~]# gluster v info testvol

Volume Name: testvol
Type: Replicate
Volume ID: 35a0b1a7-0dc3-4536-96aa-bd181b91c381
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.47.121:/bricks/brick2/testvol0
Brick2: 10.70.47.113:/bricks/brick2/testvol1
Brick3: 10.70.47.114:/bricks/brick2/testvol2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
cluster.brick-multiplex: disable
cluster.enable-shared-storage: enable

[root@dhcp47-121 ~]# gluster v set testvol group gluster-block
volume set: success

[root@dhcp47-121 ~]# gluster v info testvol

Volume Name: testvol
Type: Replicate
Volume ID: 35a0b1a7-0dc3-4536-96aa-bd181b91c381
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.47.121:/bricks/brick2/testvol0
Brick2: 10.70.47.113:/bricks/brick2/testvol1
Brick3: 10.70.47.114:/bricks/brick2/testvol2
Options Reconfigured:
server.allow-insecure: on
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.readdir-ahead: off
performance.open-behind: off
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
cluster.brick-multiplex: disable
cluster.enable-shared-storage: enable

[root@dhcp47-121 ~]# gluster v get testvol all | grep performance.write-behind
performance.write-behind-window-size    1MB
performance.write-behind                on
[root@dhcp47-121 ~]#
Tried this twice, on an already existing volume (with blocks created internally) and on a newly created volume. Both times, it resulted in setting only 16 options out of the mentioned 17.

Pranithk, thoughts? Am I missing something?
(In reply to Sweta Anandpara from comment #9)
> Tried this twice, on an already existing volume (with blocks created
> internally) and on a newly created volume. Both the times, it resulted in
> setting only 16 options out of the mentioned 17.
>
> Pranithk, thoughts? Am I missing something?

We had to remove performance.write-behind=off because of the bz: https://bugzilla.redhat.com/show_bug.cgi?id=1454313
Patch upstream: https://review.gluster.org/#/c/17387/
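As a side note (not part of the fix itself): since performance.write-behind=off was dropped from the profile, anyone who still needs write-behind disabled on a particular volume can set and check it individually; a minimal example, with <volname> as a placeholder:

# gluster volume set <volname> performance.write-behind off
# gluster volume get <volname> performance.write-behind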
Missed updating this bz :-(. Sorry for the confusion.
Thanks Pranithk. Moving this BZ to verified after confirming the patch mentioned in comment 10.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:2774