Bug 1119739 - [Dist-geo-rep] In mount-broker setup, after geo-rep stop, status shows faulty instead of stopped.
Summary: [Dist-geo-rep] In mount-broker setup, after geo-rep stop, status shows faulty instead of stopped.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.0.0
Assignee: Kotresh HR
QA Contact: Bhaskar Bandari
URL:
Whiteboard:
Depends On: 1104649
Blocks:
 
Reported: 2014-07-15 12:03 UTC by Vijaykumar Koppad
Modified: 2015-05-13 16:57 UTC
CC List: 9 users

Fixed In Version: glusterfs-3.6.0.25-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-09-22 19:44:22 UTC
Embargoed:




Links
System: Red Hat Product Errata    ID: RHEA-2014:1278    Private: 0    Priority: normal    Status: SHIPPED_LIVE
Summary: Red Hat Storage Server 3.0 bug fix and enhancement update    Last Updated: 2014-09-22 23:26:55 UTC

Description Vijaykumar Koppad 2014-07-15 12:03:15 UTC
Description of problem: In a mount-broker setup, after geo-rep stop, the status shows faulty instead of Stopped.
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
[root@Willard geo-rep-auto-logs]# gluster v geo master geoaccount.43.137::slave stop
Stopping geo-replication session between master & geoaccount.43.137::slave has been successful
[root@Willard geo-rep-auto-logs]# gluster v geo master geoaccount.43.137::slave status

MASTER NODE               MASTER VOL    MASTER BRICK          SLAVE                  STATUS    CHECKPOINT STATUS    CRAWL STATUS
---------------------------------------------------------------------------------------------------------------------------------
Willard.blr.redhat.com    master        /bricks/brick1/b1     10.70.42.169::slave    faulty    N/A                  N/A
Willard.blr.redhat.com    master        /bricks/brick2/b5     10.70.42.169::slave    faulty    N/A                  N/A
Willard.blr.redhat.com    master        /bricks/brick3/b9     10.70.42.169::slave    faulty    N/A                  N/A
Morgan.blr.redhat.com     master        /bricks/brick1/b2     10.70.43.167::slave    faulty    N/A                  N/A
Morgan.blr.redhat.com     master        /bricks/brick2/b6     10.70.43.167::slave    faulty    N/A                  N/A
Morgan.blr.redhat.com     master        /bricks/brick3/b10    10.70.43.167::slave    faulty    N/A                  N/A
Normand.blr.redhat.com    master        /bricks/brick1/b4     10.70.42.250::slave    faulty    N/A                  N/A
Normand.blr.redhat.com    master        /bricks/brick2/b8     10.70.42.250::slave    faulty    N/A                  N/A
Normand.blr.redhat.com    master        /bricks/brick3/b12    10.70.42.250::slave    faulty    N/A                  N/A
Arnoldo.blr.redhat.com    master        /bricks/brick1/b3     10.70.43.137::slave    faulty    N/A                  N/A
Arnoldo.blr.redhat.com    master        /bricks/brick2/b7     10.70.43.137::slave    faulty    N/A                  N/A
Arnoldo.blr.redhat.com    master        /bricks/brick3/b11    10.70.43.137::slave    faulty    N/A                  N/A

:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
 

Version-Release number of selected component (if applicable): glusterfs-3.6.0.24-1.el6rhs


How reproducible: Happens every time.


Steps to Reproduce:
1. Create and start a geo-rep mount-broker session between a master volume and a slave volume.
2. After the session becomes stable, stop it.
3. Check the geo-rep status (a command sketch follows below).
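
For reference, a minimal sketch of the commands involved, assuming an RHGS 3.0-style mountbroker configuration. The volume names (master, slave), the unprivileged user geoaccount, and the slave host 10.70.43.137 are taken from the report; the group name geogroup, the broker root /var/mountbroker-root, and the glusterd.vol options are illustrative assumptions of a typical setup, not details from this bug.

# On the slave nodes (assumed typical mountbroker prerequisites):
groupadd geogroup
useradd -G geogroup geoaccount
mkdir -p /var/mountbroker-root && chmod 0711 /var/mountbroker-root
# Add to /etc/glusterfs/glusterd.vol on the slave nodes, then restart glusterd:
#   option mountbroker-root /var/mountbroker-root
#   option mountbroker-geo-replication.geoaccount slave
#   option geo-replication-log-group geogroup
#   option rpc-auth-allow-insecure on

# On a master node: distribute keys, then create, start, stop, and check the session.
gluster system:: execute gsec_create
gluster volume geo-replication master geoaccount@10.70.43.137::slave create push-pem
gluster volume geo-replication master geoaccount@10.70.43.137::slave start
gluster volume geo-replication master geoaccount@10.70.43.137::slave stop
gluster volume geo-replication master geoaccount@10.70.43.137::slave status   # expected Stopped, shows faulty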

Actual results: After geo-rep stop in a mount-broker setup, the geo-rep status shows faulty.


Expected results: The status should show Stopped once the session has been stopped.


Additional info:

Comment 2 Kotresh HR 2014-07-16 12:18:48 UTC
This is another symptom of the root cause "conf file path was not created in mount-broker setup", which was fixed as part of the following patches:

Downstream:
https://code.engineering.redhat.com/gerrit/#/c/26825/
Upstream:
http://review.gluster.org/7977

The fix is merged both upstream and downstream. Since this bug tracks a different symptom, it is not being marked as a duplicate. Moving to MODIFIED.
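
For anyone hitting the same symptom, a quick way to check for this root cause is to look for the per-session gsyncd configuration on a master node. The /var/lib/glusterd/geo-replication path and the <mastervol>_<slavehost>_<slavevol> directory naming below are the usual glusterfs 3.6-era layout and are stated here as assumptions, not details taken from the patch.

# On a master node, list the geo-replication session configs (path assumed, see above):
ls /var/lib/glusterd/geo-replication/
# A healthy session should have a gsyncd.conf for the pair, e.g. (directory name illustrative):
ls /var/lib/glusterd/geo-replication/master_10.70.43.137_slave/gsyncd.conf
# If the conf file path was never created, the session config cannot be read,
# which per this bug surfaces as a faulty status even after stop.
gluster volume geo-replication master geoaccount@10.70.43.137::slave config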

Comment 3 Vijaykumar Koppad 2014-07-23 10:14:02 UTC
Verified on build glusterfs-3.6.0.25-1.

Comment 7 errata-xmlrpc 2014-09-22 19:44:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html

