Bug 1342938 - [geo-rep]: Add-Brick use case: create push-pem force on existing geo-rep fails
Summary: [geo-rep]: Add-Brick use case: create push-pem force on existing geo-rep fails
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.3
Assignee: Saravanakumar
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Depends On:
Blocks: 1311817 1342979 1344605 1344607
 
Reported: 2016-06-06 07:41 UTC by Rahul Hinduja
Modified: 2016-06-23 05:26 UTC
CC List: 4 users

Fixed In Version: glusterfs-3.7.9-9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned to: 1342979
Environment:
Last Closed: 2016-06-23 05:26:12 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:1240 0 normal SHIPPED_LIVE Red Hat Gluster Storage 3.1 Update 3 2016-06-23 08:51:28 UTC

Description Rahul Hinduja 2016-06-06 07:41:04 UTC
Description of problem:
=======================

The known way to add a brick from a new node is to follow these steps:

1. gsec create
2. create push-pem force

But with the recently added validation check, create push-pem force fails, complaining that the geo-rep session exists, and hence gsyncd on the new node does not start.

If the master volume, slave volume, user, and host all remain the same, then create push-pem with force should not fail.

[root@dhcp37-88 ~]# gluster system:: execute gsec_create 
Common secret pub file present at /var/lib/glusterd/geo-replication/common_secret.pem.pub
[root@dhcp37-88 ~]# #gluster volume geo-replication master_nr rahul.37.52::slave_nr create push-pem
[root@dhcp37-88 ~]# gluster volume geo-replication status
 
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                       SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED                  
---------------------------------------------------------------------------------------------------------------------------------------------------------
10.70.37.88    vol24         /rhs/brick1/b1    root          ssh://10.70.37.52::vol25    10.70.37.190    Active     Changelog Crawl    2016-06-05 07:31:49          
10.70.37.88    vol24         /rhs/brick2/b3    root          ssh://10.70.37.52::vol25    10.70.37.190    Active     Changelog Crawl    2016-06-05 07:31:49          
10.70.37.43    vol24         /rhs/brick1/b2    root          ssh://10.70.37.52::vol25    10.70.37.52     Passive    N/A                N/A                          
10.70.37.43    vol24         /rhs/brick2/b4    root          ssh://10.70.37.52::vol25    10.70.37.52     Passive    N/A                N/A                          
[root@dhcp37-88 ~]# 
[root@dhcp37-88 ~]# gluster volume geo-replication vol24 10.70.37.52::vol25 create push-pem force
Geo-replication session between vol24 and 10.70.37.52::vol25 is still active. Please stop the session and retry.
geo-replication command failed
[root@dhcp37-88 ~]#



Version-Release number of selected component (if applicable):
=============================================================

glusterfs-3.7.9-8


How reproducible:
=================
Always


Steps to Reproduce:
===================
1. Have existing geo-rep session
2. Add new node on Master cluster
3. gsec_create
4. create push-pem force with the same master, slave, user, and hostname (see the command sketch below)
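
For clarity, the reproduce steps map to roughly the following commands, run from an existing master node. This is a minimal sketch assuming a hypothetical new node 10.70.37.100 and the vol24 / 10.70.37.52::vol25 session from the transcript above:

# Step 1: an existing geo-rep session between vol24 and 10.70.37.52::vol25 is assumed.

# Step 2: add the new node to the master cluster (hypothetical address).
gluster peer probe 10.70.37.100

# Step 3: regenerate the common secret pem pub file.
gluster system:: execute gsec_create

# Step 4: re-run create push-pem with force, keeping master, slave, user and host unchanged.
gluster volume geo-replication vol24 10.70.37.52::vol25 create push-pem force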

Actual results:
===============

create push-pem force fails and gsyncd does not get started
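
One way to double-check the failure mode on an affected build (a hedged sketch, not output from the original report): after the failed force run, the bricks on the new node never appear in the session status and no gsyncd process comes up there.

# On a master node: status still lists only the original bricks.
gluster volume geo-replication vol24 10.70.37.52::vol25 status

# On the new node: no gsyncd monitor/worker is running.
ps -ef | grep '[g]syncd'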

Comment 4 Atin Mukherjee 2016-06-06 10:43:35 UTC
Upstream mainline patch http://review.gluster.org/14653 posted for review

Comment 7 Aravinda VK 2016-06-07 13:10:28 UTC
Downstream patch: https://code.engineering.redhat.com/gerrit/#/c/76070

Comment 9 Rahul Hinduja 2016-06-08 17:05:49 UTC
Verified with build: glusterfs-3.7.9-9

Add-brick use case, i.e., create push-pem force on an existing geo-rep session, works. Moving this bug to verified state.
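
For reference, a minimal re-verification sketch on the fixed build (volume names taken from the report; the actual output is not reproduced here):

# On glusterfs-3.7.9-9, force no longer trips the "session is still active" check.
gluster system:: execute gsec_create
gluster volume geo-replication vol24 10.70.37.52::vol25 create push-pem force

# Status should now also list workers for the bricks on the newly added node.
gluster volume geo-replication vol24 10.70.37.52::vol25 status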

Comment 12 errata-xmlrpc 2016-06-23 05:26:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240

