Bug 1572043 - [Geo-rep]: Status in ACTIVE/Created state
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.0
Assigned To: Sanju
QA Contact: Rochelle
Keywords: Regression
Depends On:
Blocks: 1293332 1503137
Reported: 2018-04-26 01:05 EDT by Rochelle
Modified: 2018-09-14 00:08 EDT
CC List: 10 users

See Also:
Fixed In Version: glusterfs-3.12.2-10
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1577627
Environment:
Last Closed: 2018-09-04 02:47:18 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


External Trackers:
Red Hat Product Errata RHSA-2018:2607 (Priority: None, Status: None, Summary: None, Last Updated: 2018-09-04 02:48 EDT)

Description Rochelle 2018-04-26 01:05:33 EDT
Description of problem:
=======================
Geo-replication status was CREATED/ACTIVE as opposed to ACTIVE/PASSIVE.

Geo-replication session was started and the following was shown as the status of the session:
----------------------------------------------------------------------------------------------
[root@dhcp41-226 scripts]# gluster volume geo-replication master 10.70.41.160::slave status
 
MASTER NODE     MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED                  
-----------------------------------------------------------------------------------------------------------------------------------------------------
10.70.41.226    master        /rhs/brick3/b7    root          10.70.41.160::slave    N/A             Created    N/A                N/A                          
10.70.41.226    master        /rhs/brick1/b1    root          10.70.41.160::slave    N/A             Created    N/A                N/A                          
10.70.41.230    master        /rhs/brick2/b5    root          10.70.41.160::slave    N/A             Created    N/A                N/A                          
10.70.41.229    master        /rhs/brick2/b4    root          10.70.41.160::slave    N/A             Created    N/A                N/A                          
10.70.41.219    master        /rhs/brick2/b6    root          10.70.41.160::slave    N/A             Created    N/A                N/A                          
10.70.41.227    master        /rhs/brick3/b8    root          10.70.41.160::slave    N/A             Created    N/A                N/A                          
10.70.41.227    master        /rhs/brick1/b2    root          10.70.41.160::slave    N/A             Created    N/A                N/A                          
10.70.41.228    master        /rhs/brick3/b9    root          10.70.41.160::slave    10.70.41.160    Active     Changelog Crawl    2018-04-23 06:13:53          
10.70.41.228    master        /rhs/brick1/b3    root          10.70.41.160::slave    10.70.42.79     Active     Changelog Crawl    2018-04-23 06:13:53        




Version-Release number of selected component (if applicable):
============================================================
[root@dhcp41-226 ~]# rpm -qa | grep gluster
glusterfs-fuse-3.12.2-7.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-7.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-libs-3.12.2-7.el7rhgs.x86_64
glusterfs-cli-3.12.2-7.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.2.x86_64
glusterfs-rdma-3.12.2-7.el7rhgs.x86_64
glusterfs-events-3.12.2-7.el7rhgs.x86_64
glusterfs-3.12.2-7.el7rhgs.x86_64
glusterfs-client-xlators-3.12.2-7.el7rhgs.x86_64
glusterfs-server-3.12.2-7.el7rhgs.x86_64
vdsm-gluster-4.19.43-2.3.el7rhgs.noarch
python2-gluster-3.12.2-7.el7rhgs.x86_64
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
glusterfs-api-3.12.2-7.el7rhgs.x86_64


How reproducible:
=================
2/2

Steps to Reproduce:
===================
1. Create a master and a slave cluster of 6 nodes each
2. Create and start the master volume (tiered: cold tier 1x(4+2), hot tier 1x3)
3. Create and start the slave volume (tiered: cold tier 1x(4+2), hot tier 1x3)
4. Enable quota on the master volume
5. Enable shared storage on the master volume
6. Set up a geo-rep session between the master and slave volumes
7. Mount the master volume on a client
8. Create data from the master client
(A command-level sketch of these steps follows.)
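
For reference, a minimal command sketch of the steps above, assuming the node and brick names visible in the status output earlier in this report (the hostnames, brick paths, slave address 10.70.41.160 and mount point are placeholders for this test bed, not commands taken from the sosreports):

# Master volume: 1x(4+2) disperse cold tier, then a 1x3 replica hot tier attached
gluster volume create master disperse 6 redundancy 2 \
    10.70.41.226:/rhs/brick1/b1 10.70.41.227:/rhs/brick1/b2 10.70.41.228:/rhs/brick1/b3 \
    10.70.41.229:/rhs/brick2/b4 10.70.41.230:/rhs/brick2/b5 10.70.41.219:/rhs/brick2/b6
gluster volume start master
gluster volume tier master attach replica 3 \
    10.70.41.226:/rhs/brick3/b7 10.70.41.227:/rhs/brick3/b8 10.70.41.228:/rhs/brick3/b9
# (the slave volume "slave" is created the same way on the slave cluster)

# Quota and shared storage on the master volume
gluster volume quota master enable
gluster volume set all cluster.enable-shared-storage enable

# Geo-rep session (assumes passwordless SSH from a master node to 10.70.41.160)
gluster system:: execute gsec_create
gluster volume geo-replication master 10.70.41.160::slave create push-pem
gluster volume geo-replication master 10.70.41.160::slave start

# Mount the master volume on a client and create data
mount -t glusterfs 10.70.41.226:/master /mnt/master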

Actual results:
==============
gsyncd was down on 5 out of 6 master nodes.
Once started, the geo-rep status was ACTIVE/Created instead of ACTIVE/PASSIVE.
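
A generic way to confirm which master nodes are missing their workers (not taken from the sosreports; session name as above):

# On each master node; no output means no gsyncd monitor/worker is running there
ps -ef | grep '[g]syncd'

# From any master node, per-brick detail for the session
gluster volume geo-replication master 10.70.41.160::slave status detail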


Expected results:
=================
gsyncd should be up on all nodes.
Once started, the geo-rep status should be ACTIVE/PASSIVE.


Sosreports at: http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/rallan/1570586/
Comment 3 Aravinda VK 2018-04-26 01:55:56 EDT
Looks like a setup issue on some of the nodes.
Comment 4 Rahul Hinduja 2018-04-26 02:21:03 EDT
(In reply to Aravinda VK from comment #3)
> Looks like a setup issue on some of the nodes.

AFAIK, this is only seen when a tier is involved; otherwise, geo-replication works as expected in the same cluster.
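
If it helps to rule tiering in or out, the tier layout of the affected volume can be confirmed with standard commands (volume name as above):

# A tiered volume lists separate Hot Tier and Cold Tier brick sections
gluster volume info master
gluster volume tier master status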
Comment 18 errata-xmlrpc 2018-09-04 02:47:18 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607
