Bug 1119739

Summary: [Dist-geo-rep] In mount-broker setup, after geo-rep stop, status shows faulty instead of stopped.
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Vijaykumar Koppad <vkoppad>
Component: geo-replication
Assignee: Kotresh HR <khiremat>
Status: CLOSED ERRATA
QA Contact: Bhaskar Bandari <bbandari>
Severity: high
Priority: high
Version: rhgs-3.0
CC: aavati, ajha, avishwan, bbandari, csaba, david.macdonald, nlevinki, nsathyan, ssamanta
Target Release: RHGS 3.0.0
Hardware: x86_64
OS: Linux
Fixed In Version: glusterfs-3.6.0.25-1
Doc Type: Bug Fix
Last Closed: 2014-09-22 19:44:22 UTC
Type: Bug
Bug Depends On: 1104649

Description Vijaykumar Koppad 2014-07-15 12:03:15 UTC
Description of problem: In mount-broker setup, after geo-rep stop, status shows faulty instead of stopped.  
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
[root@Willard geo-rep-auto-logs]# gluster v geo master geoaccount.43.137::slave stop
Stopping geo-replication session between master & geoaccount.43.137::slave has been successful
[root@Willard geo-rep-auto-logs]# gluster v geo master geoaccount.43.137::slave status

MASTER NODE               MASTER VOL    MASTER BRICK          SLAVE                  STATUS    CHECKPOINT STATUS    CRAWL STATUS
---------------------------------------------------------------------------------------------------------------------------------
Willard.blr.redhat.com    master        /bricks/brick1/b1     10.70.42.169::slave    faulty    N/A                  N/A
Willard.blr.redhat.com    master        /bricks/brick2/b5     10.70.42.169::slave    faulty    N/A                  N/A
Willard.blr.redhat.com    master        /bricks/brick3/b9     10.70.42.169::slave    faulty    N/A                  N/A
Morgan.blr.redhat.com     master        /bricks/brick1/b2     10.70.43.167::slave    faulty    N/A                  N/A
Morgan.blr.redhat.com     master        /bricks/brick2/b6     10.70.43.167::slave    faulty    N/A                  N/A
Morgan.blr.redhat.com     master        /bricks/brick3/b10    10.70.43.167::slave    faulty    N/A                  N/A
Normand.blr.redhat.com    master        /bricks/brick1/b4     10.70.42.250::slave    faulty    N/A                  N/A
Normand.blr.redhat.com    master        /bricks/brick2/b8     10.70.42.250::slave    faulty    N/A                  N/A
Normand.blr.redhat.com    master        /bricks/brick3/b12    10.70.42.250::slave    faulty    N/A                  N/A
Arnoldo.blr.redhat.com    master        /bricks/brick1/b3     10.70.43.137::slave    faulty    N/A                  N/A
Arnoldo.blr.redhat.com    master        /bricks/brick2/b7     10.70.43.137::slave    faulty    N/A                  N/A
Arnoldo.blr.redhat.com    master        /bricks/brick3/b11    10.70.43.137::slave    faulty    N/A                  N/A

:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
 

Version-Release number of selected component (if applicable): glusterfs-3.6.0.24-1.el6rhs


How reproducible: Happens every time.


Steps to Reproduce:
1. Create and start a geo-rep mount-broker session between the master and slave volumes.
2. After the session becomes stable, stop it.
3. Check the geo-rep status.
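The steps above can be sketched with the gluster CLI as follows. This is a minimal sketch, assuming a mount-broker setup with the unprivileged user "geoaccount" and passwordless SSH already configured on the slave cluster; the volume names (master, slave) and the user come from this report, while the slave hostname "slavehost" is illustrative:

```shell
# 1. Create and start the geo-rep session from a master node
#    (geoaccount@slavehost is the mount-broker user on the slave;
#    "slavehost" is a placeholder hostname).
gluster volume geo-replication master geoaccount@slavehost::slave create push-pem
gluster volume geo-replication master geoaccount@slavehost::slave start

# 2. Once the session is stable, stop it.
gluster volume geo-replication master geoaccount@slavehost::slave stop

# 3. Check the status: it should report Stopped, but with this bug
#    every brick shows faulty instead.
gluster volume geo-replication master geoaccount@slavehost::slave status
```

These commands require a running gluster cluster and cannot be executed standalone; the `gluster v geo` form used in the original output is the abbreviated spelling of the same `gluster volume geo-replication` command.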

Actual results: After geo-rep stop in a mount-broker setup, the geo-rep status shows faulty.


Expected results: The status should show Stopped once the session is stopped.


Additional info:

Comment 2 Kotresh HR 2014-07-16 12:18:48 UTC
This is another symptom of the root cause "conf file path was not created in mount-broker setup", which was fixed as part of the following patches:

Downstream:
https://code.engineering.redhat.com/gerrit/#/c/26825/
Upstream:
http://review.gluster.org/7977

The fix is merged both upstream and downstream. Since this bug is a different symptom, it is not being marked as a duplicate. Moving it to Modified.

Comment 3 Vijaykumar Koppad 2014-07-23 10:14:02 UTC
Verified on build glusterfs-3.6.0.25-1.

Comment 7 errata-xmlrpc 2014-09-22 19:44:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html