Description of problem:
=======================
While using gdeploy to set up a geo-rep session, the prerequisites I took care of were two clusters with one volume each, i.e. master and slave. The master was a 3x3 volume and so was the slave. I did not enable shared storage on the master cluster, assuming the script would take care of enabling it. I also noticed that meta_volume was false. We have always recommended that customers enable shared storage on the master cluster and set meta_volume to true. Shared storage is required if two bricks of the same sub-volume fall on the same node. Meta-volume should be enabled by default. If meta_volume is set to true and shared storage is not enabled, the session will be FAULTY.

Version-Release number of selected component (if applicable):
===========================================================
gdeploy-2.0.2-32.el7rhgs.noarch

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Run the gdeploy file: gdeploy -c geo-replication.conf
2. Check gluster v list on the master node
   ==> Only one volume: master
   ==> The master cluster should have the master volume as well as the shared storage volume created and mounted on all master nodes.
       (gluster v set all cluster.enable-shared-storage enable)
3. gluster v geo-replication master 10.70.42.250::slave config use_meta_volume
   false
   ==> This output should essentially be true.
       (gluster v geo-replication master 10.70.42.250::slave config use_meta_volume true)

Actual results:
================
Shared storage is not enabled.
Meta_volume is not set to true.

Expected results:
=================
Shared storage needs to be enabled on the master cluster as part of geo-rep setup.
meta_volume needs to be set to true as well before starting the session.

Additional info:
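Until the fix is available, the missing prerequisites can be applied by hand with the same commands called out in the steps above. This is a sketch of the manual workaround; the session name master 10.70.42.250::slave is taken from this report, so substitute your own volumes and slave host:

```
# Enable the cluster-wide shared storage volume on the master cluster
# (required when two bricks of the same sub-volume land on the same node)
gluster volume set all cluster.enable-shared-storage enable

# Point the geo-rep session at the meta volume before starting it;
# use_meta_volume=true without shared storage leaves the session FAULTY
gluster volume geo-replication master 10.70.42.250::slave config use_meta_volume true
```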
Assigning this bug to the correct QA contact.
https://github.com/gluster/gdeploy/pull/525
Tested with the following steps:
1. Created two clusters with three nodes each.
2. Created the master and slave volumes.
3. Created and started the session using the conf file mentioned below:

[hosts]
10.70.35.188

[geo-replication]
action=create
mastervol=10.70.35.188:master
slavevol=10.70.35.26:slave
slavenodes=10.70.35.26,10.70.35.18
force=yes
start=yes

4. Verified shared storage is enabled on the master cluster.
5. Verified meta_volume was set to true before starting the session.
6. Shared storage volume was created:

[root@dhcp35-188 ~]# gluster volume list
gluster_shared_storage
master

7. Paused the session; status reflected as "Paused".
8. Resumed the session; status reflected as "ACTIVE/PASSIVE".
9. Stopped the session; status reflected as "Stopped".
10. Started the session; status reflected as "ACTIVE/PASSIVE".

GDEPLOY VERSION: gdeploy-2.0.2-34.el7rhgs.noarch
ANSIBLE VERSION: ansible-2.8.1-1.el7ae.noarch
GLUSTER VERSION: glusterfs-server-6.0-7.el7rhgs

Based on the above input, moving the bug to VERIFIED.
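The checks in steps 4-6 above can be repeated from the master node with the commands below. This is a sketch using the volume names and slave IP from that test setup; substitute your own:

```
# The shared storage volume should now appear alongside the master volume
gluster volume list

# use_meta_volume should report true for the session
gluster volume geo-replication master 10.70.35.26::slave config use_meta_volume
```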
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:3250