Bug 1460918 - [geo-rep]: Status shows ACTIVE for most workers in EC before it becomes the PASSIVE
Summary: [geo-rep]: Status shows ACTIVE for most workers in EC before it becomes the PASSIVE
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: rhgs-3.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Kotresh HR
QA Contact: Rochelle
URL:
Whiteboard: rebase
Depends On:
Blocks: 1500284 1500835 1503134
 
Reported: 2017-06-13 07:14 UTC by Rahul Hinduja
Modified: 2018-09-14 04:38 UTC
CC List: 5 users

Fixed In Version: glusterfs-3.12.2-1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1500284
Environment:
Last Closed: 2018-09-04 06:32:21 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2018:2607 0 None None None 2018-09-04 06:33:53 UTC

Description Rahul Hinduja 2017-06-13 07:14:20 UTC
Description of problem:
=======================

My understanding is that all the workers try to acquire the lock; the one that gets it becomes ACTIVE, while the rest stay in Initializing and then go to PASSIVE. However, with EC (tried once), I observed most of the workers becoming ACTIVE and only then settling into PASSIVE.
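
For clarity, the expected behaviour can be sketched as a small standalone illustration of the "first worker to take the lock wins" idea. This is not gsyncd code; the flock(1)-on-a-shared-file approach and the lock path are assumptions, used only to show the intended outcome that exactly one worker per subvolume reports Active while the rest report Passive:

    #!/bin/bash
    # Illustration only: every worker of a subvolume races for one exclusive lock.
    LOCK=/tmp/georep-demo.lock      # hypothetical shared lock path
    exec 9>"$LOCK"                  # open the lock file on fd 9
    if flock -n 9; then             # non-blocking: only one worker can succeed
        echo "worker $$ got the lock -> would report Active"
        sleep 60                    # hold the lock while syncing
    else
        echo "worker $$ lost the race -> would report Passive"
    fi

The status outputs below show the transient window where most workers reported Active before settling into one Active per subvolume: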

[root@dhcp37-150 scripts]# gluster volume geo-replication master 10.70.37.71::slave status
 
MASTER NODE     MASTER VOL    MASTER BRICK       SLAVE USER    SLAVE                 SLAVE NODE      STATUS             CRAWL STATUS       LAST_SYNCED                  
-------------------------------------------------------------------------------------------------------------------------------------------------------------
10.70.37.150    master        /rhs/brick1/b1     root          10.70.37.71::slave    10.70.37.181    Active             Changelog Crawl    2017-06-13 06:51:59          
10.70.37.150    master        /rhs/brick2/b7     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.171    master        /rhs/brick1/b2     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.171    master        /rhs/brick2/b8     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.105    master        /rhs/brick1/b3     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.105    master        /rhs/brick2/b9     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.194    master        /rhs/brick1/b4     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.194    master        /rhs/brick2/b10    root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.42     master        /rhs/brick1/b5     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.42     master        /rhs/brick2/b11    root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.190    master        /rhs/brick1/b6     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.190    master        /rhs/brick2/b12    root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
[root@dhcp37-150 scripts]# 
[root@dhcp37-150 scripts]# 
[root@dhcp37-150 scripts]# gluster volume geo-replication master 10.70.37.71::slave status
 
MASTER NODE     MASTER VOL    MASTER BRICK       SLAVE USER    SLAVE                 SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED                  
-----------------------------------------------------------------------------------------------------------------------------------------------------
10.70.37.150    master        /rhs/brick1/b1     root          10.70.37.71::slave    10.70.37.181    Active     Changelog Crawl    2017-06-13 06:52:01          
10.70.37.150    master        /rhs/brick2/b7     root          10.70.37.71::slave    10.70.37.181    Active     Changelog Crawl    2017-06-13 06:52:01          
10.70.37.42     master        /rhs/brick1/b5     root          10.70.37.71::slave    10.70.37.181    Active     N/A                N/A                          
10.70.37.42     master        /rhs/brick2/b11    root          10.70.37.71::slave    10.70.37.181    Active     N/A                N/A                          
10.70.37.190    master        /rhs/brick1/b6     root          10.70.37.71::slave    10.70.37.71     Passive    N/A                N/A                          
10.70.37.190    master        /rhs/brick2/b12    root          10.70.37.71::slave    10.70.37.71     Active     N/A                N/A                          
10.70.37.171    master        /rhs/brick1/b2     root          10.70.37.71::slave    10.70.37.71     Active     N/A                N/A                          
10.70.37.171    master        /rhs/brick2/b8     root          10.70.37.71::slave    10.70.37.71     Active     N/A                N/A                          
10.70.37.194    master        /rhs/brick1/b4     root          10.70.37.71::slave    10.70.37.71     Active     N/A                N/A                          
10.70.37.194    master        /rhs/brick2/b10    root          10.70.37.71::slave    10.70.37.71     Active     N/A                N/A                          
10.70.37.105    master        /rhs/brick1/b3     root          10.70.37.71::slave    10.70.37.181    Active     N/A                N/A                          
10.70.37.105    master        /rhs/brick2/b9     root          10.70.37.71::slave    10.70.37.181    Active     N/A                N/A                          
[root@dhcp37-150 scripts]# 


[root@dhcp37-150 scripts]# gluster volume geo-replication master 10.70.37.71::slave status
 
MASTER NODE     MASTER VOL    MASTER BRICK       SLAVE USER    SLAVE                 SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED                  
-----------------------------------------------------------------------------------------------------------------------------------------------------
10.70.37.150    master        /rhs/brick1/b1     root          10.70.37.71::slave    10.70.37.181    Active     Changelog Crawl    2017-06-13 06:52:01          
10.70.37.150    master        /rhs/brick2/b7     root          10.70.37.71::slave    10.70.37.181    Active     Changelog Crawl    2017-06-13 06:52:01          
10.70.37.190    master        /rhs/brick1/b6     root          10.70.37.71::slave    10.70.37.71     Passive    N/A                N/A                          
10.70.37.190    master        /rhs/brick2/b12    root          10.70.37.71::slave    10.70.37.71     Passive    N/A                N/A                          
10.70.37.105    master        /rhs/brick1/b3     root          10.70.37.71::slave    10.70.37.181    Passive    N/A                N/A                          
10.70.37.105    master        /rhs/brick2/b9     root          10.70.37.71::slave    10.70.37.181    Passive    N/A                N/A                          
10.70.37.42     master        /rhs/brick1/b5     root          10.70.37.71::slave    10.70.37.181    Passive    N/A                N/A                          
10.70.37.42     master        /rhs/brick2/b11    root          10.70.37.71::slave    10.70.37.181    Passive    N/A                N/A                          
10.70.37.194    master        /rhs/brick1/b4     root          10.70.37.71::slave    10.70.37.71     Passive    N/A                N/A                          
10.70.37.194    master        /rhs/brick2/b10    root          10.70.37.71::slave    10.70.37.71     Passive    N/A                N/A                          
10.70.37.171    master        /rhs/brick1/b2     root          10.70.37.71::slave    10.70.37.71     Passive    N/A                N/A                          
10.70.37.171    master        /rhs/brick2/b8     root          10.70.37.71::slave    10.70.37.71     Passive    N/A                N/A                          
[root@dhcp37-150 scripts]# 


Version-Release number of selected component (if applicable):
=============================================================

glusterfs-geo-replication-3.8.4-27.el7rhgs.x86_64


How reproducible:
=================

Seen once so far; will try to reproduce again.


Steps to Reproduce:
===================
1. Create geo-replication with EC being Master
2. Monitor the status in a loop (see the example commands below)
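
For reference, a concrete way to run the two steps (host names, brick paths and the disperse geometry below are assumptions, not the exact test bed; the slave volume 10.70.37.71::slave is assumed to already exist and be started):

    # Step 1: create an EC (dispersed) master volume and set up geo-replication
    gluster volume create master disperse 6 redundancy 2 <host1>:/rhs/brick1/b1 <host2>:/rhs/brick1/b2 <host3>:/rhs/brick1/b3 <host4>:/rhs/brick1/b4 <host5>:/rhs/brick1/b5 <host6>:/rhs/brick1/b6
    gluster volume start master
    gluster system:: execute gsec_create
    gluster volume geo-replication master 10.70.37.71::slave create push-pem
    gluster volume geo-replication master 10.70.37.71::slave start

    # Step 2: monitor the status in a loop and watch the STATUS column
    watch -n 5 gluster volume geo-replication master 10.70.37.71::slave status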


Additional info:
================

Unfortunately, I do not have logs because the cleanup script had already been executed. Raising this as a low-priority bug; I will retry and upload the logs.

Comment 3 Kotresh HR 2017-10-10 12:36:13 UTC
Upstream Patch:

https://review.gluster.org/#/c/18464/ (master)

Comment 7 errata-xmlrpc 2018-09-04 06:32:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607

