Bug 1460918

Summary: [geo-rep]: Status shows ACTIVE for most workers in EC before they become PASSIVE
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Rahul Hinduja <rhinduja>
Component: geo-replication
Assignee: Kotresh HR <khiremat>
Status: CLOSED ERRATA
QA Contact: Rochelle <rallan>
Severity: low
Docs Contact:
Priority: unspecified
Version: rhgs-3.3
CC: csaba, khiremat, rhs-bugs, sheggodu, storage-qa-internal
Target Milestone: ---
Keywords: ZStream
Target Release: RHGS 3.4.0
Hardware: x86_64
OS: Linux
Whiteboard: rebase
Fixed In Version: glusterfs-3.12.2-1
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1500284 (view as bug list)
Environment:
Last Closed: 2018-09-04 06:32:21 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1500284, 1500835, 1503134    

Description Rahul Hinduja 2017-06-13 07:14:20 UTC
Description of problem:
=======================

My understanding is that all the workers try to acquire a lock; the one that gets it becomes ACTIVE, while the rest remain in Initializing and then go PASSIVE. However, with EC (tried once), I observed all workers becoming ACTIVE before settling down to PASSIVE.
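
The election described above can be sketched as a non-blocking lock race. A minimal local sketch, assuming a plain `fcntl.flock` on a scratch file stands in for the lock the geo-rep workers actually contend for (the real lock is held against the slave side, and `try_become_active` is a hypothetical helper, not geo-rep's API):

```python
import fcntl
import os
import tempfile

def try_become_active(lock_path):
    """Non-blocking lock attempt that decides a worker's role.

    Simplified analogue (assumption): each worker races for one
    exclusive lock; the winner syncs as ACTIVE, losers go PASSIVE.
    """
    fd = os.open(lock_path, os.O_CREAT | os.O_RDWR, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return "Active", fd        # lock won: this worker does the syncing
    except OSError:
        os.close(fd)
        return "Passive", None     # lock held elsewhere: stand by

# Three "workers" race for the same lock; only the first caller wins,
# because its fd stays open and keeps holding the exclusive lock.
lock_file = os.path.join(tempfile.mkdtemp(), "georep.lock")
roles = [try_become_active(lock_file)[0] for _ in range(3)]
print(roles)
```

The bug as reported is consistent with the status being displayed before this race has resolved, so multiple workers transiently show ACTIVE.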

[root@dhcp37-150 scripts]# gluster volume geo-replication master 10.70.37.71::slave status
 
MASTER NODE     MASTER VOL    MASTER BRICK       SLAVE USER    SLAVE                 SLAVE NODE      STATUS             CRAWL STATUS       LAST_SYNCED                  
-------------------------------------------------------------------------------------------------------------------------------------------------------------
10.70.37.150    master        /rhs/brick1/b1     root          10.70.37.71::slave    10.70.37.181    Active             Changelog Crawl    2017-06-13 06:51:59          
10.70.37.150    master        /rhs/brick2/b7     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.171    master        /rhs/brick1/b2     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.171    master        /rhs/brick2/b8     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.105    master        /rhs/brick1/b3     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.105    master        /rhs/brick2/b9     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.194    master        /rhs/brick1/b4     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.194    master        /rhs/brick2/b10    root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.42     master        /rhs/brick1/b5     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.42     master        /rhs/brick2/b11    root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.190    master        /rhs/brick1/b6     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.190    master        /rhs/brick2/b12    root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
[root@dhcp37-150 scripts]# 
[root@dhcp37-150 scripts]# 
[root@dhcp37-150 scripts]# gluster volume geo-replication master 10.70.37.71::slave status
 
MASTER NODE     MASTER VOL    MASTER BRICK       SLAVE USER    SLAVE                 SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED                  
-----------------------------------------------------------------------------------------------------------------------------------------------------
10.70.37.150    master        /rhs/brick1/b1     root          10.70.37.71::slave    10.70.37.181    Active     Changelog Crawl    2017-06-13 06:52:01          
10.70.37.150    master        /rhs/brick2/b7     root          10.70.37.71::slave    10.70.37.181    Active     Changelog Crawl    2017-06-13 06:52:01          
10.70.37.42     master        /rhs/brick1/b5     root          10.70.37.71::slave    10.70.37.181    Active     N/A                N/A                          
10.70.37.42     master        /rhs/brick2/b11    root          10.70.37.71::slave    10.70.37.181    Active     N/A                N/A                          
10.70.37.190    master        /rhs/brick1/b6     root          10.70.37.71::slave    10.70.37.71     Passive    N/A                N/A                          
10.70.37.190    master        /rhs/brick2/b12    root          10.70.37.71::slave    10.70.37.71     Active     N/A                N/A                          
10.70.37.171    master        /rhs/brick1/b2     root          10.70.37.71::slave    10.70.37.71     Active     N/A                N/A                          
10.70.37.171    master        /rhs/brick2/b8     root          10.70.37.71::slave    10.70.37.71     Active     N/A                N/A                          
10.70.37.194    master        /rhs/brick1/b4     root          10.70.37.71::slave    10.70.37.71     Active     N/A                N/A                          
10.70.37.194    master        /rhs/brick2/b10    root          10.70.37.71::slave    10.70.37.71     Active     N/A                N/A                          
10.70.37.105    master        /rhs/brick1/b3     root          10.70.37.71::slave    10.70.37.181    Active     N/A                N/A                          
10.70.37.105    master        /rhs/brick2/b9     root          10.70.37.71::slave    10.70.37.181    Active     N/A                N/A                          
[root@dhcp37-150 scripts]# 


[root@dhcp37-150 scripts]# gluster volume geo-replication master 10.70.37.71::slave status
 
MASTER NODE     MASTER VOL    MASTER BRICK       SLAVE USER    SLAVE                 SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED                  
-----------------------------------------------------------------------------------------------------------------------------------------------------
10.70.37.150    master        /rhs/brick1/b1     root          10.70.37.71::slave    10.70.37.181    Active     Changelog Crawl    2017-06-13 06:52:01          
10.70.37.150    master        /rhs/brick2/b7     root          10.70.37.71::slave    10.70.37.181    Active     Changelog Crawl    2017-06-13 06:52:01          
10.70.37.190    master        /rhs/brick1/b6     root          10.70.37.71::slave    10.70.37.71     Passive    N/A                N/A                          
10.70.37.190    master        /rhs/brick2/b12    root          10.70.37.71::slave    10.70.37.71     Passive    N/A                N/A                          
10.70.37.105    master        /rhs/brick1/b3     root          10.70.37.71::slave    10.70.37.181    Passive    N/A                N/A                          
10.70.37.105    master        /rhs/brick2/b9     root          10.70.37.71::slave    10.70.37.181    Passive    N/A                N/A                          
10.70.37.42     master        /rhs/brick1/b5     root          10.70.37.71::slave    10.70.37.181    Passive    N/A                N/A                          
10.70.37.42     master        /rhs/brick2/b11    root          10.70.37.71::slave    10.70.37.181    Passive    N/A                N/A                          
10.70.37.194    master        /rhs/brick1/b4     root          10.70.37.71::slave    10.70.37.71     Passive    N/A                N/A                          
10.70.37.194    master        /rhs/brick2/b10    root          10.70.37.71::slave    10.70.37.71     Passive    N/A                N/A                          
10.70.37.171    master        /rhs/brick1/b2     root          10.70.37.71::slave    10.70.37.71     Passive    N/A                N/A                          
10.70.37.171    master        /rhs/brick2/b8     root          10.70.37.71::slave    10.70.37.71     Passive    N/A                N/A                          
[root@dhcp37-150 scripts]# 


Version-Release number of selected component (if applicable):
=============================================================

glusterfs-geo-replication-3.8.4-27.el7rhgs.x86_64


How reproducible:
=================

Seen once so far; will try to reproduce again.


Steps to Reproduce:
===================
1. Create a geo-replication session with an EC (erasure-coded) volume as Master
2. Monitor the status in a loop
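
Step 2 can be scripted; a minimal polling sketch, assuming the session names from this report and running on a master node (`poll_status` is just an illustrative helper; a plain `watch -n` on the command works equally well). Keeping each snapshot makes it possible to catch the short-lived all-ACTIVE state:

```python
import subprocess
import time

# Status command from this report (assumption: run on a master node).
STATUS_CMD = ["gluster", "volume", "geo-replication",
              "master", "10.70.37.71::slave", "status"]

def poll_status(cmd, iterations, interval_secs):
    """Run `cmd` repeatedly and return each run's captured stdout,
    so transient states (e.g. spurious ACTIVE) are preserved."""
    snapshots = []
    for _ in range(iterations):
        result = subprocess.run(cmd, capture_output=True, text=True)
        snapshots.append(result.stdout)
        time.sleep(interval_secs)
    return snapshots
```

Usage would be `poll_status(STATUS_CMD, 60, 5)` to sample every 5 seconds for 5 minutes, then grep the snapshots for rows showing more than the expected number of Active workers.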


Additional info:
================

Unfortunately, I do not have logs because the cleanup script was already executed. Raising this as a low-priority bug; I will retry and upload the logs.

Comment 3 Kotresh HR 2017-10-10 12:36:13 UTC
Upstream Patch:

https://review.gluster.org/#/c/18464/ (master)

Comment 7 errata-xmlrpc 2018-09-04 06:32:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607