Bug 1707704 - [geo-rep+gdeploy]: Creation of a session is not the recommended way
Summary: [geo-rep+gdeploy]: Creation of a session is not the recommended way
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gdeploy
Version: rhgs-3.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.0
Assignee: Sachidananda Urs
QA Contact: Mugdha Soni
URL:
Whiteboard:
Depends On:
Blocks: 1475724 1696807
 
Reported: 2019-05-08 07:12 UTC by Rochelle
Modified: 2019-10-30 12:19 UTC (History)

Fixed In Version: gdeploy-2.0.2-33.el7rhgs
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-30 12:19:10 UTC
Embargoed:




Links
Red Hat Product Errata RHBA-2019:3250 - 2019-10-30 12:19:19 UTC

Description Rochelle 2019-05-08 07:12:00 UTC
Description of problem:
=======================
While using gdeploy to set up my geo-rep session, the prerequisites
I took care of were two clusters with one volume each, i.e. master
and slave.
Both the master and the slave were 3x3 volumes.

I did not enable shared storage on my master cluster, as I expected the script to take care of enabling it.

Also, I noticed that meta_volume was false.

We have always recommended that customers enable shared storage on the master cluster and set meta_volume to true.

Shared storage is required if two bricks of the same sub-volume fall on the same node.
The meta volume should be enabled by default.
If meta_volume is set to true but shared storage is not enabled, the session will be FAULTY.
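For reference, the manual equivalents of these prerequisites with the standard gluster CLI look roughly like this (the volume name "master" and the slave host 10.70.42.250 are taken from the reproduction steps; adjust for your setup):

```shell
# Create and mount the shared storage volume on all master-cluster nodes
gluster volume set all cluster.enable-shared-storage enable

# Tell the geo-rep session to use the meta volume
gluster volume geo-replication master 10.70.42.250::slave config use_meta_volume true
```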



Version-Release number of selected component (if applicable):
===========================================================
gdeploy-2.0.2-32.el7rhgs.noarch


How reproducible:
=================
Always



Steps to Reproduce:
===================
1. Run the gdeploy file: gdeploy -c geo-replication.conf

2. Check gluster v list on the master node ==> only one volume (master)
==> The master cluster should have the master volume as well as the shared storage volume created and mounted on all master nodes. (gluster v set all cluster.enable-shared-storage enable)

3. gluster v geo-replication master 10.70.42.250::slave config use_meta_volume
false
==> This output should be true. (gluster v geo-replication master 10.70.42.250::slave config use_meta_volume true)



Actual results:
================
Shared storage is not enabled
Meta_volume is not set to true


Expected results:
=================
Shared storage needs to be enabled on the master cluster as part of geo-rep setup.
meta_volume needs to be set to true as well before starting the session.


Additional info:

Comment 5 SATHEESARAN 2019-05-14 11:29:33 UTC
Assigning this bug to correct QA contact

Comment 6 Sachidananda Urs 2019-05-16 14:54:14 UTC
https://github.com/gluster/gdeploy/pull/525

Comment 10 Mugdha Soni 2019-07-05 06:48:08 UTC
Tested with the following steps:
1. Created two clusters with three nodes each.
2. Created the master and slave volumes.
3. Created and started the session using the conf file below:
       [hosts]
       10.70.35.188
     
       [geo-replication]
       action=create
       mastervol=10.70.35.188:master
       slavevol=10.70.35.26:slave
       slavenodes=10.70.35.26,10.70.35.18
       force=yes
       start=yes
 
4. Verified shared storage is enabled on the master cluster.
5. Verified meta_volume was set to true before starting the session.
6. Shared storage volume was created:
    [root@dhcp35-188 ~]# gluster volume list
    gluster_shared_storage
    master
7. Paused the session; status reflected as "Paused".
8. Resumed the session; status reflected as "ACTIVE/PASSIVE".
9. Stopped the session; status reflected as "Stopped".
10. Started the session; status reflected as "ACTIVE/PASSIVE".
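The lifecycle checks in steps 7-10 map onto the standard geo-replication CLI; a sketch using the volume names and slave host from the conf file above:

```shell
# Pause / resume / stop / start the existing geo-rep session
gluster volume geo-replication master 10.70.35.26::slave pause
gluster volume geo-replication master 10.70.35.26::slave resume
gluster volume geo-replication master 10.70.35.26::slave stop
gluster volume geo-replication master 10.70.35.26::slave start

# Check the reported session state after each transition
gluster volume geo-replication master 10.70.35.26::slave status
```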

GDEPLOY VERSION: gdeploy-2.0.2-34.el7rhgs.noarch
ANSIBLE VERSION: ansible-2.8.1-1.el7ae.noarch
GLUSTER VERSION: glusterfs-server-6.0-7.el7rhgs


Based on the above, moving the bug to verified.

Comment 12 errata-xmlrpc 2019-10-30 12:19:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3250

