Bug 1219937

Summary: Running status second time shows no active sessions
Product: [Community] GlusterFS
Component: geo-replication
Version: mainline
Reporter: Aravinda VK <avishwan>
Assignee: Aravinda VK <avishwan>
CC: bugs, gluster-bugs
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: unspecified
Keywords: Reopened
Hardware: Unspecified
OS: Unspecified
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Type: Bug
Last Closed: 2016-06-16 12:59:33 UTC
Bug Blocks: 1219938

Description Aravinda VK 2015-05-08 18:55:58 UTC
Description of problem:
Create and start a geo-rep session.
Run the status command twice. The first run shows the status fine; the second run fails with the error "No active geo-replication sessions between <MASTERVOL> and <SLAVEHOST>::<SLAVEVOL>".

Version-Release number of selected component (if applicable):


How reproducible:
Always


Steps to Reproduce:
1. Create and start a geo-rep session
2. Run the status command twice (see the sketch below)
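
For reference, a minimal reproduction sketch in Python. The volume name, slave host and slave volume below are placeholders, the gluster CLI must be on PATH, and the session from step 1 must already exist:

    import subprocess

    # Placeholder names -- substitute the real master volume, slave host
    # and slave volume used when creating the session.
    MASTER, SLAVE = "mastervol", "slavehost::slavevol"

    status_cmd = ["gluster", "volume", "geo-replication", MASTER, SLAVE, "status"]

    # Run status twice; with this bug the second run fails with
    # "No active geo-replication sessions between ..." instead of
    # printing the status table again.
    for attempt in (1, 2):
        result = subprocess.run(status_cmd, capture_output=True, text=True)
        print("attempt %d -> exit %d" % (attempt, result.returncode))
        print(result.stdout or result.stderr)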

Actual results:
The second status command fails with the "No active geo-replication sessions" error.

Expected results:
The status output is shown on every invocation.

Additional info:

Comment 1 Anand Avati 2015-05-08 18:59:05 UTC
REVIEW: http://review.gluster.org/10694 (geo-rep: Fix corrupt gsyncd output) posted (#1) for review on master by Aravinda VK (avishwan)

Comment 2 Anand Avati 2015-05-09 03:28:35 UTC
COMMIT: http://review.gluster.org/10694 committed in master by Vijay Bellur (vbellur) 
------
commit 77edf5f7f78d5a551526b0d9bef8e0c0a8f22c1c
Author: Aravinda VK <avishwan>
Date:   Sat May 9 00:26:14 2015 +0530

    geo-rep: Fix corrupt gsyncd output
    
    When gsyncd fails with a Python traceback, glusterd fails to
    parse the gsyncd output and shows an error.
    
    BUG: 1219937
    Change-Id: Ic32fd897c49a5325294a6588351b539c6e124338
    Signed-off-by: Aravinda VK <avishwan>
    Reviewed-on: http://review.gluster.org/10694
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
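
The commit message above describes the failure mode: an unexpected Python traceback from gsyncd lands in the output that glusterd parses, so the parse fails and the CLI reports "No active geo-replication sessions". The snippet below is only a rough sketch of that idea, not the actual patch; the helper name is illustrative. The point is to route tracebacks to the log and stderr, never to stdout, so the caller always sees either well-formed output or a clean non-zero exit.

    import logging
    import sys
    import traceback

    def run_keeping_stdout_clean(func, *args, **kwargs):
        """Run func; if it raises, keep the traceback out of stdout so a
        machine-readable consumer of stdout never sees corrupt output."""
        try:
            return func(*args, **kwargs)
        except Exception:
            logging.error("gsyncd worker failed:\n%s", traceback.format_exc())
            sys.stderr.write("gsyncd: internal error, see the log for details\n")
            sys.exit(1)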

Comment 3 Anand Avati 2015-05-09 04:39:58 UTC
REVIEW: http://review.gluster.org/10698 (geo-rep: Update Not Started to Created in code and doc) posted (#1) for review on master by Aravinda VK (avishwan)

Comment 4 Anand Avati 2015-05-09 09:19:54 UTC
REVIEW: http://review.gluster.org/10698 (geo-rep: Update Not Started to Created in code and doc) posted (#2) for review on master by Vijay Bellur (vbellur)

Comment 5 Anand Avati 2015-05-09 17:31:19 UTC
REVIEW: http://review.gluster.org/10698 (geo-rep: Update Not Started to Created in code and doc) posted (#3) for review on master by Kotresh HR (khiremat)

Comment 6 Anand Avati 2015-05-10 03:24:54 UTC
COMMIT: http://review.gluster.org/10698 committed in master by Vijay Bellur (vbellur) 
------
commit deb76ef2a97ea51c0cbe329a1fdb636071fda90f
Author: Aravinda VK <avishwan>
Date:   Sat May 9 10:03:42 2015 +0530

    geo-rep: Update Not Started to Created in code and doc
    
    The "Not Started" status is now "Created"; the "Not Started"
    string has been replaced in code and docs.
    
    Change-Id: If7d606c2cc8156e41291e7eebe9d0da4ad7ac28d
    Signed-off-by: Aravinda VK <avishwan>
    BUG: 1219937
    Reviewed-on: http://review.gluster.org/10698
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Kotresh HR <khiremat>
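
A hypothetical illustration of the rename described above (the table and helper are not the actual GlusterFS code): a session that has been created but never started is now labelled "Created" instead of "Not Started".

    # Illustrative only -- the other entries are the usual geo-rep states,
    # but this mapping is not taken from the real source tree.
    STATUS_LABELS = {
        "created": "Created",   # was shown as "Not Started" before this change
        "active": "Active",
        "passive": "Passive",
        "faulty": "Faulty",
        "stopped": "Stopped",
    }

    def display_status(raw_state):
        # Fall back to the raw state for anything not in the table.
        return STATUS_LABELS.get(raw_state.lower(), raw_state)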

Comment 7 Aravinda VK 2015-05-18 10:47:01 UTC
This bug is getting closed because a release has been made available that
should address the reported issue. In case the problem is still not fixed with
glusterfs-3.7.0, please open a new bug report.

Comment 8 Niels de Vos 2016-06-16 12:59:33 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user