Bug 1234419 - [geo-rep]: Feature fan-out fails with the use of meta volume config
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Version: 3.1
Hardware: x86_64 Linux
Priority: high    Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.0
Assigned To: Kotresh HR
QA Contact: Rahul Hinduja
Keywords: Regression
Depends On:
Blocks: 1202842 1234882 1234898
Reported: 2015-06-22 10:01 EDT by Rahul Hinduja
Modified: 2015-07-29 01:06 EDT
CC: 7 users

See Also:
Fixed In Version: glusterfs-3.7.1-6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1234882
Environment:
Last Closed: 2015-07-29 01:06:35 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:1495 normal SHIPPED_LIVE Important: Red Hat Gluster Storage 3.1 update 2015-07-29 04:26:26 EDT

Description Rahul Hinduja 2015-06-22 10:01:28 EDT
Description of problem:
=======================

When geo-rep sessions are created from one master volume to 2 slave volumes (fan-out), all bricks of one of the sessions become PASSIVE. This happens only when the meta volume config (use_meta_volume) is set to true.

Slave volumes: slave1 and slave2


Creating geo-rep sessions between the master volume and the slave volumes (slave1, slave2):

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave1 create push-pem force
Creating geo-replication session between master & 10.70.46.154::slave1 has been successful
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave2 create push-pem force
Creating geo-replication session between master & 10.70.46.154::slave2 has been successful
[root@georep1 scripts]# 

Setting use_meta_volume to true for the slave1 and slave2 sessions:

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave1 config use_meta_volume true
geo-replication config updated successfully
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave2 config use_meta_volume true
geo-replication config updated successfully
[root@georep1 scripts]# 
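
A quick way to read back what was set (not captured in the original run; shown only as a sketch) is to run the config subcommand with just the key, which should print the value currently configured for that session:

gluster volume geo-replication master 10.70.46.154::slave1 config use_meta_volume
gluster volume geo-replication master 10.70.46.154::slave2 config use_meta_volume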


Starting the geo-rep sessions for slave volumes slave1 and slave2:

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave1 start
Starting geo-replication session between master & 10.70.46.154::slave1 has been successful
[root@georep1 scripts]#
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave2 start
Starting geo-replication session between master & 10.70.46.154::slave2 has been successful
[root@georep1 scripts]# 

Status:
=======
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave1 status
 
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                   SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED                  
-----------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave1    10.70.46.101    Active     Changelog Crawl    2015-06-23 00:46:12          
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave1    10.70.46.101    Active     Changelog Crawl    2015-06-23 00:46:12          
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave1    10.70.46.154    Passive    N/A                N/A                          
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave1    10.70.46.154    Passive    N/A                N/A                          
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave1    10.70.46.103    Passive    N/A                N/A                          
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave1    10.70.46.103    Passive    N/A                N/A                          
[root@georep1 scripts]# 
[root@georep1 scripts]# 
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave2 status
 
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                   SLAVE NODE      STATUS     CRAWL STATUS    LAST_SYNCED          
------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave2    10.70.46.101    Passive    N/A             N/A                  
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave2    10.70.46.101    Passive    N/A             N/A                  
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave2    10.70.46.154    Passive    N/A             N/A                  
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave2    10.70.46.154    Passive    N/A             N/A                  
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave2    10.70.46.103    Passive    N/A             N/A                  
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave2    10.70.46.103    Passive    N/A             N/A                  
[root@georep1 scripts]# 


The second slave volume, slave2, has only Passive bricks, and hence data is never synced to slave2.
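
(Side note, not from the original report: the detailed status output is another way to confirm that nothing is being synced to slave2; with every brick Passive, the per-brick crawl and sync fields stay at N/A.)

gluster volume geo-replication master 10.70.46.154::slave2 status detail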

Meta volume bricks:

[root@georep1 scripts]# ls /var/run/gluster/ss_brick/geo-rep/
6f023fd5-49a5-4af7-a68a-b7071a8b9ff0_subvol_1.lock  6f023fd5-49a5-4af7-a68a-b7071a8b9ff0_subvol_2.lock
[root@georep1 scripts]# 
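
The lock file names above carry only a volume UUID and the subvolume index; nothing in the name identifies the slave session, which would explain the status above: the slave1 and slave2 workers for the same subvolume contend for the same lock, and only one of them can go Active. Below is a minimal stand-alone sketch of that contention (purely illustrative, using util-linux flock(1) on a throwaway /tmp file, not the actual gsyncd code):

# Two workers (one per slave session) racing for the same per-subvolume lock.
LOCK=/tmp/demo_subvol_1.lock    # stand-in for the meta-volume lock file
( flock -n 9 && echo "slave1 worker, subvol 1: Active"; sleep 5 ) 9>"$LOCK" &
sleep 1
( flock -n 9 && echo "slave2 worker, subvol 1: Active" \
  || echo "slave2 worker, subvol 1: Passive (lock already held)" ) 9>"$LOCK"
wait

If the lock were keyed per slave session as well (one lock per subvolume per session), each session could have its own Active worker per subvolume, which is the behaviour reported after the fix in comment 9.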



Version-Release number of selected component (if applicable):
==============================================================

glusterfs-3.7.1-4.el6rhs.x86_64

How reproducible:
=================
1/1
Comment 5 Kotresh HR 2015-06-23 07:16:45 EDT
I think the description says it's fan-out, but the bug title says cascading?
Comment 7 Kotresh HR 2015-06-23 09:23:42 EDT
Upstream Patch (master):
http://review.gluster.org/#/c/11367/

Upstream Patch (3.7):
http://review.gluster.org/#/c/11366/
Comment 8 Kotresh HR 2015-06-25 05:24:37 EDT
Downstream Patch:
https://code.engineering.redhat.com/gerrit/#/c/51566/
Comment 9 Rahul Hinduja 2015-07-04 07:46:21 EDT
Verified with build: glusterfs-3.7.1-7.el6rhs.x86_64

Bricks in both the slaves become Active, i.e., for each subvolume there is 1 Active brick.

Moving the bug to verified state.
Comment 11 errata-xmlrpc 2015-07-29 01:06:35 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html
