Bug 1279921

Summary: volume info of %s obtained from %s: ambiguous uuid - Starting geo-rep session
Product: [Community] GlusterFS Reporter: Aravinda VK <avishwan>
Component: geo-replication Assignee: Aravinda VK <avishwan>
Status: CLOSED CURRENTRELEASE QA Contact:
Severity: medium Docs Contact:
Priority: high    
Version: mainline CC: bugs, mselvaga, tbrammer
Target Milestone: ---   
Target Release: ---   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: glusterfs-3.8rc2 Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1279644 Environment:
Last Closed: 2016-06-16 13:44:23 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1279644    
Bug Blocks:    

Description Aravinda VK 2015-11-10 13:19:17 UTC
+++ This bug was initially created as a clone of Bug #1279644 +++

Description of problem:

When starting a geo-replication session with "gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> start", the command returns successfully. However, running "gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> status" shows the session still in the Created state, and no data is synchronized.

Version-Release number of selected component (if applicable):


How reproducible:



Steps to Reproduce:
1. gluster system:: execute gsec_create
2. gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> create push-pem
3. gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> start
4. gluster volume geo-replication <master_volume> <slave_host>::<slave_volume> status

Actual results:

The status only shows "Created", and the log under /var/log/glusterfs/geo-replication/<vol>/ shows:

[2015-11-09 17:59:37.783683] E [syncdutils(monitor):247:log_raise_exception] <top>: volume info of %s obtained from %s: ambiguous uuid
[2015-11-09 17:59:37.784065] I [syncdutils(monitor):220:finalize] <top>: exiting.
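The literal "%s" placeholders in the log line indicate that the error message was raised without its format arguments ever being substituted. A minimal Python sketch of the classic pattern that produces this symptom (the class and variable names are hypothetical; the actual gsyncd source is not shown here):

```python
class GsyncdError(Exception):
    """Stand-in for gsyncd's exception class (hypothetical)."""

volname, node = "gv0", "slave-node"

# Buggy pattern: format arguments are passed logging-style to the
# constructor, so they are stored on the exception but never substituted
# into the string; str(err) still contains the literal "%s" placeholders.
err = GsyncdError("volume info of %s obtained from %s: ambiguous uuid",
                  volname, node)
print(err)
```

The placeholders survive into whatever logs the exception, which is exactly what the monitor log above shows.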

Expected results:

The geo-replication session starts and data synchronization begins.

Additional info:

--- Additional comment from Aravinda VK on 2015-11-10 07:24:15 EST ---

This looks like an issue with the gluster volume info output. Please provide the output of:

gluster volume info <master_volume>
gluster volume info <master_volume> --xml
gluster volume info <slave_volume> --remote-host=<slave_host>
gluster volume info <slave_volume> --remote-host=<slave_host> --xml

Comment 1 Vijay Bellur 2015-11-16 14:37:19 UTC
REVIEW: http://review.gluster.org/12566 (geo-rep: Fix syntax errors in GsyncdError) posted (#2) for review on master by Aravinda VK (avishwan)

Comment 2 Vijay Bellur 2015-11-17 05:42:05 UTC
REVIEW: http://review.gluster.org/12566 (geo-rep: Fix syntax errors in GsyncdError) posted (#3) for review on master by Aravinda VK (avishwan)

Comment 3 Vijay Bellur 2015-11-21 14:19:47 UTC
REVIEW: http://review.gluster.org/12566 (geo-rep: Fix syntax errors in GsyncdError) posted (#4) for review on master by Aravinda VK (avishwan)

Comment 4 Vijay Bellur 2015-11-23 19:23:48 UTC
COMMIT: http://review.gluster.org/12566 committed in master by Vijay Bellur (vbellur) 
------
commit 74699ddd777f7e862991cf3afad91823d30e5b84
Author: Aravinda VK <avishwan>
Date:   Tue Nov 10 18:46:28 2015 +0530

    geo-rep: Fix syntax errors in GsyncdError
    
    %s was not replaced by actual values in GsyncdError
    
    BUG: 1279921
    Change-Id: I3c0a10f07383ca72844a46f930b4aa3d3c29f568
    Signed-off-by: Aravinda VK <avishwan>
    Reviewed-on: http://review.gluster.org/12566
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
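The commit message says "%s was not replaced by actual values in GsyncdError". A minimal sketch of the corrected pattern, with hypothetical names since the patched gsyncd source is not reproduced here: the fix is to apply %-interpolation to the message before constructing the exception.

```python
class GsyncdError(Exception):
    """Stand-in for gsyncd's exception class (hypothetical)."""

def ambiguous_uuid_error(volname, host):
    # Fixed form: interpolate the real volume name and host into the
    # message first, then construct the exception from the final string.
    return GsyncdError("volume info of %s obtained from %s: ambiguous uuid"
                       % (volname, host))

print(ambiguous_uuid_error("gv0", "slave1"))
```

With this form, the monitor log carries the actual volume name and host instead of literal "%s" placeholders.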

Comment 5 Niels de Vos 2016-06-16 13:44:23 UTC
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user