Bug 1342261 - [georep]: Stopping volume fails if it has geo-rep session (Even in stopped state)
Summary: [georep]: Stopping volume fails if it has geo-rep session (Even in stopped state)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.3
Assignee: Kotresh HR
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Depends On:
Blocks: 1311817 1342420 1342431 1342634
 
Reported: 2016-06-02 19:28 UTC by Rahul Hinduja
Modified: 2016-06-23 05:25 UTC
CC List: 3 users

Fixed In Version: glusterfs-3.7.9-8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1342420
Environment:
Last Closed: 2016-06-23 05:25:43 UTC
Embargoed:


Attachments


Links
System: Red Hat Product Errata    ID: RHBA-2016:1240    Private: 0    Priority: normal    Status: SHIPPED_LIVE
Summary: Red Hat Gluster Storage 3.1 Update 3    Last Updated: 2016-06-23 08:51:28 UTC

Description Rahul Hinduja 2016-06-02 19:28:53 UTC
Description of problem:
=======================

A prerequisite for stopping a volume is that any geo-rep session on it must be stopped first. However, even when the geo-rep session is already stopped, volume stop fails, complaining that the geo-rep session is active.

[root@dhcp37-88 scripts]# gluster volume stop MASTER
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: MASTER: failed: geo-replication sessions are active for the volume MASTER.
Stop geo-replication sessions involved in this volume. Use 'volume geo-replication status' command for more info.
[root@dhcp37-88 scripts]# gluster volume geo-replication status
 
MASTER NODE     MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                       SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED          
---------------------------------------------------------------------------------------------------------------------------------------------
10.70.37.88     MASTER        /rhs/brick1/b1    root          ssh://10.70.37.52::SLAVE    N/A           Stopped    N/A             N/A                  
10.70.37.88     MASTER        /rhs/brick2/b4    root          ssh://10.70.37.52::SLAVE    N/A           Stopped    N/A             N/A                  
10.70.37.213    MASTER        /rhs/brick1/b3    root          ssh://10.70.37.52::SLAVE    N/A           Stopped    N/A             N/A                  
10.70.37.213    MASTER        /rhs/brick2/b6    root          ssh://10.70.37.52::SLAVE    N/A           Stopped    N/A             N/A                  
10.70.37.43     MASTER        /rhs/brick1/b2    root          ssh://10.70.37.52::SLAVE    N/A           Stopped    N/A             N/A                  
10.70.37.43     MASTER        /rhs/brick2/b5    root          ssh://10.70.37.52::SLAVE    N/A           Stopped    N/A             N/A                  
[root@dhcp37-88 scripts]# gluster volume list
MASTER
gluster_shared_storage
[root@dhcp37-88 scripts]# 
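
For anyone triaging a similar report: the session state that glusterd consults lives on disk under /var/lib/glusterd/geo-replication/ on the master nodes. A minimal way to inspect it, assuming the default working directory and a MASTER_10.70.37.52_SLAVE session directory as implied by the status output above (exact directory and file names may differ between releases):

# List the per-session working directories kept by glusterd (one per master/slave pair)
ls /var/lib/glusterd/geo-replication/

# The monitor state file for the MASTER -> 10.70.37.52::SLAVE session is expected to
# read "Stopped" once the session has been stopped (file name assumed for this release)
cat /var/lib/glusterd/geo-replication/MASTER_10.70.37.52_SLAVE/monitor.status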



Version-Release number of selected component (if applicable):
=============================================================

glusterfs-3.7.9-7


How reproducible:
=================
Always


Steps to Reproduce:
===================
1. Create a geo-rep session between the master and slave volumes
2. Stop the geo-rep session
3. Stop the master volume (see the CLI sketch below)
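
For illustration, the steps above map roughly to the following CLI sequence. The volume and slave names are taken from the status output in the description; the create-time push-pem option is a typical example rather than the exact command used here.

# 1. Create and start a geo-rep session between master and slave
gluster volume geo-replication MASTER 10.70.37.52::SLAVE create push-pem
gluster volume geo-replication MASTER 10.70.37.52::SLAVE start

# 2. Stop the geo-rep session
gluster volume geo-replication MASTER 10.70.37.52::SLAVE stop

# 3. Stop the master volume -- on glusterfs-3.7.9-7 this fails even though
#    the session is reported as Stopped
gluster volume stop MASTER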

Actual results:
===============

Volume stop fails, complaining that the geo-rep session is active even though it is in the Stopped state.

Expected results:
=================
Volume stop should succeed when the geo-rep session is in the Stopped state.

Comment 3 Kotresh HR 2016-06-03 09:10:39 UTC
Upstream Patch:
http://review.gluster.org/#/c/14636/1 (master)

Comment 8 Rahul Hinduja 2016-06-04 07:11:05 UTC
Verified with build: glusterfs-3.7.9-8

Stopping the volume when geo-replication is in the Stopped state now succeeds. Moving this BZ to the Verified state.
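
For reference, a sketch of the expected post-fix behaviour on glusterfs-3.7.9-8, with the success message in the format the gluster CLI normally prints (assumed transcript, not copied from the verification run):

# With the geo-rep session already in the Stopped state:
gluster volume stop MASTER
# Expected prompt and result:
#   Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
#   volume stop: MASTER: success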

Comment 11 errata-xmlrpc 2016-06-23 05:25:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240

