Bug 1500284 - [geo-rep]: Status shows ACTIVE for most workers in EC before it becomes the PASSIVE
Summary: [geo-rep]: Status shows ACTIVE for most workers in EC before it becomes the PASSIVE
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Assignee: Kotresh HR
QA Contact:
URL:
Whiteboard:
Depends On: 1460918
Blocks: 1500835
 
Reported: 2017-10-10 10:37 UTC by Kotresh HR
Modified: 2017-12-08 17:43 UTC
CC List: 6 users

Fixed In Version: glusterfs-3.13.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1460918
: 1500835 (view as bug list)
Environment:
Last Closed: 2017-12-08 17:43:00 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Kotresh HR 2017-10-10 10:37:56 UTC
+++ This bug was initially created as a clone of Bug #1460918 +++

Description of problem:
=======================

My understanding is that all workers try to acquire the lock; the one that gets it becomes ACTIVE, while the rest stay in Initializing and then go to PASSIVE. However, with EC as master (tried once), I observed all workers becoming ACTIVE before they settled into PASSIVE.

[root@dhcp37-150 scripts]# gluster volume geo-replication master 10.70.37.71::slave status
 
MASTER NODE     MASTER VOL    MASTER BRICK       SLAVE USER    SLAVE                 SLAVE NODE      STATUS             CRAWL STATUS       LAST_SYNCED                  
-------------------------------------------------------------------------------------------------------------------------------------------------------------
10.70.37.150    master        /rhs/brick1/b1     root          10.70.37.71::slave    10.70.37.181    Active             Changelog Crawl    2017-06-13 06:51:59          
10.70.37.150    master        /rhs/brick2/b7     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.171    master        /rhs/brick1/b2     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.171    master        /rhs/brick2/b8     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.105    master        /rhs/brick1/b3     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.105    master        /rhs/brick2/b9     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.194    master        /rhs/brick1/b4     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.194    master        /rhs/brick2/b10    root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.42     master        /rhs/brick1/b5     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.42     master        /rhs/brick2/b11    root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.190    master        /rhs/brick1/b6     root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
10.70.37.190    master        /rhs/brick2/b12    root          10.70.37.71::slave    N/A             Initializing...    N/A                N/A                          
[root@dhcp37-150 scripts]# 
[root@dhcp37-150 scripts]# 
[root@dhcp37-150 scripts]# gluster volume geo-replication master 10.70.37.71::slave status
 
MASTER NODE     MASTER VOL    MASTER BRICK       SLAVE USER    SLAVE                 SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED                  
-----------------------------------------------------------------------------------------------------------------------------------------------------
10.70.37.150    master        /rhs/brick1/b1     root          10.70.37.71::slave    10.70.37.181    Active     Changelog Crawl    2017-06-13 06:52:01          
10.70.37.150    master        /rhs/brick2/b7     root          10.70.37.71::slave    10.70.37.181    Active     Changelog Crawl    2017-06-13 06:52:01          
10.70.37.42     master        /rhs/brick1/b5     root          10.70.37.71::slave    10.70.37.181    Active     N/A                N/A                          
10.70.37.42     master        /rhs/brick2/b11    root          10.70.37.71::slave    10.70.37.181    Active     N/A                N/A                          
10.70.37.190    master        /rhs/brick1/b6     root          10.70.37.71::slave    10.70.37.71     Passive    N/A                N/A                          
10.70.37.190    master        /rhs/brick2/b12    root          10.70.37.71::slave    10.70.37.71     Active     N/A                N/A                          
10.70.37.171    master        /rhs/brick1/b2     root          10.70.37.71::slave    10.70.37.71     Active     N/A                N/A                          
10.70.37.171    master        /rhs/brick2/b8     root          10.70.37.71::slave    10.70.37.71     Active     N/A                N/A                          
10.70.37.194    master        /rhs/brick1/b4     root          10.70.37.71::slave    10.70.37.71     Active     N/A                N/A                          
10.70.37.194    master        /rhs/brick2/b10    root          10.70.37.71::slave    10.70.37.71     Active     N/A                N/A                          
10.70.37.105    master        /rhs/brick1/b3     root          10.70.37.71::slave    10.70.37.181    Active     N/A                N/A                          
10.70.37.105    master        /rhs/brick2/b9     root          10.70.37.71::slave    10.70.37.181    Active     N/A                N/A                          
[root@dhcp37-150 scripts]# 


[root@dhcp37-150 scripts]# gluster volume geo-replication master 10.70.37.71::slave status
 
MASTER NODE     MASTER VOL    MASTER BRICK       SLAVE USER    SLAVE                 SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED                  
-----------------------------------------------------------------------------------------------------------------------------------------------------
10.70.37.150    master        /rhs/brick1/b1     root          10.70.37.71::slave    10.70.37.181    Active     Changelog Crawl    2017-06-13 06:52:01          
10.70.37.150    master        /rhs/brick2/b7     root          10.70.37.71::slave    10.70.37.181    Active     Changelog Crawl    2017-06-13 06:52:01          
10.70.37.190    master        /rhs/brick1/b6     root          10.70.37.71::slave    10.70.37.71     Passive    N/A                N/A                          
10.70.37.190    master        /rhs/brick2/b12    root          10.70.37.71::slave    10.70.37.71     Passive    N/A                N/A                          
10.70.37.105    master        /rhs/brick1/b3     root          10.70.37.71::slave    10.70.37.181    Passive    N/A                N/A                          
10.70.37.105    master        /rhs/brick2/b9     root          10.70.37.71::slave    10.70.37.181    Passive    N/A                N/A                          
10.70.37.42     master        /rhs/brick1/b5     root          10.70.37.71::slave    10.70.37.181    Passive    N/A                N/A                          
10.70.37.42     master        /rhs/brick2/b11    root          10.70.37.71::slave    10.70.37.181    Passive    N/A                N/A                          
10.70.37.194    master        /rhs/brick1/b4     root          10.70.37.71::slave    10.70.37.71     Passive    N/A                N/A                          
10.70.37.194    master        /rhs/brick2/b10    root          10.70.37.71::slave    10.70.37.71     Passive    N/A                N/A                          
10.70.37.171    master        /rhs/brick1/b2     root          10.70.37.71::slave    10.70.37.71     Passive    N/A                N/A                          
10.70.37.171    master        /rhs/brick2/b8     root          10.70.37.71::slave    10.70.37.71     Passive    N/A                N/A                          
[root@dhcp37-150 scripts]# 
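For context on the expected behavior described at the top of this report, here is a minimal sketch (not the actual gsyncd code) of how a per-subvolume lock election can decide which worker reports Active and which report Passive. The lock path and helper name are assumptions for illustration only.

import fcntl
import os

# Assumed lock path for one subvolume; the real path used by gsyncd differs.
LOCK_PATH = "/var/run/gluster/georep-subvol1.lock"

def elect_worker_role(lock_path=LOCK_PATH):
    """Return ("Active", fd) if this worker wins the subvolume lock,
    ("Passive", None) otherwise.

    The returned file descriptor must stay open for as long as the worker
    wants to remain Active; closing it releases the lock.
    """
    fd = os.open(lock_path, os.O_CREAT | os.O_RDWR, 0o600)
    try:
        # Non-blocking exclusive lock: exactly one worker per subvolume wins.
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return "Active", fd
    except BlockingIOError:
        os.close(fd)
        return "Passive", None

if __name__ == "__main__":
    role, fd = elect_worker_role()
    print(role)

Only after a worker loses (or wins) this election should its reported status change, which is why the intermediate all-Active state in the output above is unexpected.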


Version-Release number of selected component (if applicable):
=============================================================

mainline


How reproducible:
=================

Seen once; will try to reproduce again


Steps to Reproduce:
===================
1. Create geo-replication with EC being Master
2. Monitor the status in a loop (a sample polling loop is sketched below)
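For step 2, a small polling script can capture the brief all-Active window. This is a hypothetical helper, not shipped with gluster; the volume and slave names match the setup above.

import subprocess
import time

# Poll the geo-rep status every 2 seconds so the short-lived all-Active
# window between Initializing and Passive is visible in the output.
CMD = ["gluster", "volume", "geo-replication", "master",
       "10.70.37.71::slave", "status"]

while True:
    result = subprocess.run(CMD, capture_output=True, text=True)
    print(time.strftime("%Y-%m-%d %H:%M:%S"))
    print(result.stdout, flush=True)
    time.sleep(2)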

Comment 1 Worker Ant 2017-10-10 10:40:17 UTC
REVIEW: https://review.gluster.org/18464 (geo-rep: Fix status transition) posted (#1) for review on master by Kotresh HR (khiremat)

Comment 2 Worker Ant 2017-10-11 10:13:39 UTC
COMMIT: https://review.gluster.org/18464 committed in master by Aravinda VK (avishwan) 
------
commit 3edf926a1bda43879c09694cf3904c214c94c9dc
Author: Kotresh HR <khiremat>
Date:   Tue Oct 10 05:54:04 2017 -0400

    geo-rep: Fix status transition
    
    The status transition is as below which is
    wrong.
    
    Created->Initializing->Active->Active/Passive->Stopped
    
    As soon as the monitor spawns the worker, the state
    is changed from 'Initializing' to 'Active' and then to
    'Active/Passive' based on whether worker gets the lock
    or not. This is wrong and it should directly transition
    as below.
    
    Created->Initializing->Active/Passive->Stopped
    
    Change-Id: Ibf5ca5c4fdf168c403c6da01db60b93f0604aae7
    BUG: 1500284
    Signed-off-by: Kotresh HR <khiremat>
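
To illustrate the transition change in plain terms, here is a toy sketch (not the actual patch from the review above); the WorkerStatus class and function names are invented for this example.

class WorkerStatus:
    """Toy status holder standing in for the monitor's status file."""
    def __init__(self):
        self.history = []

    def set(self, state):
        self.history.append(state)


def spawn_worker_buggy(status, acquired_lock):
    # Before the fix: every worker is marked Active as soon as it is
    # spawned, then corrected to Active/Passive after the lock election.
    status.set("Initializing...")
    status.set("Active")
    status.set("Active" if acquired_lock else "Passive")


def spawn_worker_fixed(status, acquired_lock):
    # After the fix: Created -> Initializing -> Active/Passive, with no
    # intermediate all-Active state.
    status.set("Initializing...")
    status.set("Active" if acquired_lock else "Passive")


if __name__ == "__main__":
    buggy, fixed = WorkerStatus(), WorkerStatus()
    spawn_worker_buggy(buggy, acquired_lock=False)
    spawn_worker_fixed(fixed, acquired_lock=False)
    print("buggy:", " -> ".join(buggy.history))
    print("fixed:", " -> ".join(fixed.history))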

Comment 3 Shyamsundar 2017-12-08 17:43:00 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.13.0, please open a new bug report.

glusterfs-3.13.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-December/000087.html
[2] https://www.gluster.org/pipermail/gluster-users/

