Bug 1577627 - [Geo-rep]: Status in ACTIVE/Created state
Summary: [Geo-rep]: Status in ACTIVE/Created state
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Sanju
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1293332
 
Reported: 2018-05-13 15:38 UTC by Sanju
Modified: 2020-01-09 17:44 UTC
CC List: 11 users

Fixed In Version: glusterfs-5.0
Clone Of: 1572043
Environment:
Last Closed: 2018-10-23 15:08:41 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Worker Ant 2018-05-13 15:53:05 UTC
REVIEW: https://review.gluster.org/20015 (cli: Fix for gluster volume info --xml) posted (#1) for review on master by Sanju Rakonde

Comment 2 Kotresh HR 2018-05-14 05:17:22 UTC
Description of problem:
=======================
Geo-replication status was CREATED/ACTIVE as opposed to ACTIVE/PASSIVE.

Geo-replication session was started and the following was shown as the status of the session:
----------------------------------------------------------------------------------------------
[root@dhcp41-226 scripts]# gluster volume geo-replication master 10.70.41.160::slave status
 
MASTER NODE     MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED                  
-----------------------------------------------------------------------------------------------------------------------------------------------------
10.70.41.226    master        /rhs/brick3/b7    root          10.70.41.160::slave    N/A             Created    N/A                N/A                          
10.70.41.226    master        /rhs/brick1/b1    root          10.70.41.160::slave    N/A             Created    N/A                N/A                          
10.70.41.230    master        /rhs/brick2/b5    root          10.70.41.160::slave    N/A             Created    N/A                N/A                          
10.70.41.229    master        /rhs/brick2/b4    root          10.70.41.160::slave    N/A             Created    N/A                N/A                          
10.70.41.219    master        /rhs/brick2/b6    root          10.70.41.160::slave    N/A             Created    N/A                N/A                          
10.70.41.227    master        /rhs/brick3/b8    root          10.70.41.160::slave    N/A             Created    N/A                N/A                          
10.70.41.227    master        /rhs/brick1/b2    root          10.70.41.160::slave    N/A             Created    N/A                N/A                          
10.70.41.228    master        /rhs/brick3/b9    root          10.70.41.160::slave    10.70.41.160    Active     Changelog Crawl    2018-04-23 06:13:53          
10.70.41.228    master        /rhs/brick1/b3    root          10.70.41.160::slave    10.70.42.79     Active     Changelog Crawl    2018-04-23 06:13:53        




Version-Release number of selected component (if applicable):
============================================================



How reproducible:
=================
2/2

Steps to Reproduce:
===================
1. Create a master and a slave cluster of 6 nodes each
2. Create and start the master volume (tiered: cold tier 1 x (4 + 2) and hot tier 1 x 3)
3. Create and start the slave volume (tiered: cold tier 1 x (4 + 2) and hot tier 1 x 3)
4. Enable quota on the master volume
5. Enable shared storage on the master volume
6. Set up a geo-rep session between the master and slave volumes
7. Mount the master volume on a client
8. Create data from the master client

Actual results:
==============
gsyncd was down on 5 out of 6 nodes
Once started, the geo-rep status was Active/Created


Expected results:
=================
gsyncd should be up on all nodes
Once started, the geo-rep status should be Active/Passive
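
For quick triage, here is a minimal sketch in Python 3 (a diagnostic helper, not part of gluster; it assumes the gluster CLI is on PATH and the column layout of the status output pasted above) that flags worker bricks whose status is anything other than Active or Passive:

#!/usr/bin/env python3
# Diagnostic helper (not part of gluster): list geo-rep workers whose
# STATUS column is anything other than Active or Passive.
import subprocess
import sys


def bad_workers(master_vol, slave):
    out = subprocess.run(
        ["gluster", "volume", "geo-replication", master_vol, slave, "status"],
        capture_output=True, text=True, check=True,
    ).stdout
    bad = []
    for line in out.splitlines():
        cols = line.split()
        # Skip the header row, the dashed separator line and blank lines;
        # in data rows (see the output above) cols[6] is the STATUS column.
        if len(cols) < 7 or cols[0] == "MASTER" or set(line.strip()) == {"-"}:
            continue
        node, brick, status = cols[0], cols[2], cols[6]
        if status not in ("Active", "Passive"):
            bad.append((node, brick, status))
    return bad


if __name__ == "__main__":
    # e.g. check_georep.py master 10.70.41.160::slave
    for node, brick, status in bad_workers(sys.argv[1], sys.argv[2]):
        print(f"{node} {brick} is {status}")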

Comment 3 Kotresh HR 2018-05-14 05:17:48 UTC
Root Cause:

The root cause appears to be in glusterd: the command [1] returns the same hostUuid for all the bricks, and geo-rep is simply affected by that. I believe this exists in previous versions as well. Someone from the glusterd team should confirm why this behaviour occurs with tiered volumes.

If we observe the output below, the node's UUID is a9c6f6fd-3935-4ddd-ab93-506e2813c740, but the XML output shows 020981fb-cba6-46b5-ab24-32af8a62728f.

Both of the following were taken from the same node, IP 10.70.41.226:

[root@dhcp41-226 tmp]# cat /var/lib/glusterd/glusterd.info
UUID=a9c6f6fd-3935-4ddd-ab93-506e2813c740


From gluster volume info --xml:
----------------------------------------------
            <colddisperseCount>6</colddisperseCount>
            <numberOfBricks> 1 x (4 + 2) = 6</numberOfBricks>
            <brick uuid="020981fb-cba6-46b5-ab24-32af8a62728f">10.70.41.226:/rhs/brick1/b1<name>10.70.41.226:/rhs/brick1/b1</name><hostUuid>020981fb-cba6-46b5-ab24-32af8a62728f</hostUuid><isArbiter>0</isArbiter></brick>


[1] gluster volume info <tiered-volume> --xml
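
To cross-check this on a node, here is a minimal diagnostic sketch in Python 3 (not part of gluster; it assumes it runs on a gluster node with the CLI on PATH, and the element names come from the XML snippet quoted above) that compares the local glusterd UUID with the hostUuid reported for each brick:

#!/usr/bin/env python3
# Diagnostic helper (not part of gluster): compare the local glusterd UUID
# with the hostUuid that `gluster volume info <vol> --xml` reports for each
# brick, the two values contrasted in the root cause above.
import subprocess
import sys
import xml.etree.ElementTree as ET


def local_glusterd_uuid(path="/var/lib/glusterd/glusterd.info"):
    with open(path) as f:
        for line in f:
            if line.startswith("UUID="):
                return line.strip().split("=", 1)[1]
    raise RuntimeError(f"no UUID= line found in {path}")


def brick_host_uuids(volume):
    xml_out = subprocess.run(
        ["gluster", "volume", "info", volume, "--xml"],
        capture_output=True, text=True, check=True,
    ).stdout
    root = ET.fromstring(xml_out)
    uuids = {}
    # Each <brick> element carries a <name> and a <hostUuid> child, as in
    # the snippet quoted above.
    for brick in root.iter("brick"):
        name, host_uuid = brick.find("name"), brick.find("hostUuid")
        if name is not None and host_uuid is not None:
            uuids[name.text] = host_uuid.text
    return uuids


if __name__ == "__main__":
    uuids = brick_host_uuids(sys.argv[1])
    print(f"local glusterd UUID:      {local_glusterd_uuid()}")
    print(f"distinct brick hostUuids: {len(set(uuids.values()))}")
    for name, host_uuid in sorted(uuids.items()):
        print(f"  {name}  {host_uuid}")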

Comment 4 Worker Ant 2018-05-15 05:19:13 UTC
COMMIT: https://review.gluster.org/20015 committed in master by "Atin Mukherjee" <amukherj> with a commit message- cli: Fix for gluster volume info --xml

Problem: gluster volume info --xml is showing same uuid to all
the bricks of a tier volume.

Solution: While iterating over hot/cold bricks of a tier volume, use
correct iterator.

Fixes: bz#1577627
Change-Id: Icf6a9c2a10b9da453abc262a57b7884d6638e3ed
Signed-off-by: Sanju Rakonde <srakonde>
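
For illustration only, a short Python sketch of the kind of iterator mix-up the commit message describes: the per-brick lookup key is built from the wrong counter, so every brick resolves to the same entry. The real fix is in the CLI's C code; all names, keys and the data model below are hypothetical.

# Illustration only; the data model and key names are made up.
def brick_uuids(fields, cold_count, hot_count, use_correct_iterator=True):
    """Walk the cold-tier and hot-tier bricks and collect each brick's host
    UUID from a flat key/value mapping of per-brick fields."""
    uuids = []
    i = 1  # running brick index across both tiers
    for tier_count in (cold_count, hot_count):
        for _ in range(tier_count):
            # Buggy variant: the key is built from a counter that never
            # advances, so every brick resolves to brick 1's UUID.
            idx = i if use_correct_iterator else 1
            uuids.append(fields[f"volume0.brick{idx}.uuid"])
            i += 1
    return uuids


if __name__ == "__main__":
    fields = {f"volume0.brick{n}.uuid": f"uuid-{n}" for n in range(1, 10)}
    print(brick_uuids(fields, 6, 3, use_correct_iterator=False))  # all uuid-1
    print(brick_uuids(fields, 6, 3))                              # uuid-1 .. uuid-9

With use_correct_iterator=False every brick reports the same UUID, which matches the duplicated hostUuid seen in the volume info XML above.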

Comment 5 Shyamsundar 2018-10-23 15:08:41 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/

