Bug 1569312 - [geo-rep]: Geo-replication in FAULTY state - RHEL 6
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Version: 3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.0
Assigned To: Kotresh HR
QA Contact: Rochelle
Keywords: Regression
Depends On:
Blocks: 1503137 1589782 1611111
Reported: 2018-04-18 23:33 EDT by Rochelle
Modified: 2018-09-14 01:29 EDT
CC List: 5 users

See Also:
Fixed In Version: glusterfs-3.12.2-13
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1589782
Environment:
Last Closed: 2018-09-04 02:52:37 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID                                Priority    Status    Summary    Last Updated
Red Hat Product Errata RHSA-2018:2608     None        None      None       2018-09-04 02:53 EDT

Description Rochelle 2018-04-18 23:33:05 EDT
Description of problem:
=======================
The geo-replication session is in FAULTY state on RHEL 6, as shown:

[root@dhcp43-133 ~]# gluster volume geo-replication master 10.70.43.202::slave status
 
MASTER NODE     MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED          
---------------------------------------------------------------------------------------------------------------------------------------
10.70.43.133    master        /rhs/brick1/b1    root          10.70.43.202::slave    N/A           Faulty    N/A             N/A                  
10.70.43.133    master        /rhs/brick2/b4    root          10.70.43.202::slave    N/A           Faulty    N/A             N/A                  
10.70.43.163    master        /rhs/brick1/b2    root          10.70.43.202::slave    N/A           Faulty    N/A             N/A                  
10.70.43.163    master        /rhs/brick2/b5    root          10.70.43.202::slave    N/A           Faulty    N/A             N/A                  
10.70.41.234    master        /rhs/brick1/b3    root          10.70.43.202::slave    N/A           Faulty    N/A             N/A                  
10.70.41.234    master        /rhs/brick2/b6    root          10.70.43.202::slave    N/A           Faulty    N/A             N/A  

Version-Release number of selected component (if applicable):
=============================================================
Seen in glusterfs-3.8.4-54.4.el6rhs.x86_64

How reproducible:
=================
Always


Steps to Reproduce:
====================
1. Create master and slave volumes (3x3)
2. Create and start a geo-rep session (a sketch of the commands is shown below)
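
A minimal sketch of the setup implied by these steps, assuming three master nodes, a slave volume named slave on 10.70.43.202, and passwordless root SSH from a master node to the slave node already in place (brick lists are placeholders, not taken from this report):

gluster volume create master replica 3 <nine master bricks for a 3x3 volume>
gluster volume start master
gluster volume create slave replica 3 <nine slave bricks>
gluster volume start slave

# From a master node: generate the pem keys, then create and start the session
gluster system:: execute gsec_create
gluster volume geo-replication master 10.70.43.202::slave create push-pem
gluster volume geo-replication master 10.70.43.202::slave start
gluster volume geo-replication master 10.70.43.202::slave status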


Actual results:
===============
Session is in FAULTY state

Expected results:
=================
The session should not be in the Faulty state.
Comment 6 Kotresh HR 2018-06-11 07:54:48 EDT
Upstream Patch: https://review.gluster.org/#/c/20221/1
Comment 12 errata-xmlrpc 2018-09-04 02:52:37 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2608
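
A quick way to confirm the fix on an affected node (not part of the original report; package names are the usual RHGS ones and may vary by installation) is to check the installed build against the Fixed In Version above and re-check the session:

rpm -q glusterfs glusterfs-geo-replication    # expect glusterfs-3.12.2-13 or later
gluster volume geo-replication master 10.70.43.202::slave status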
