Bug 1121716 - [EARLY ACCESS] config output shows remote_gsyncd: /nonexistent/gsyncd
Summary: [EARLY ACCESS] config output shows remote_gsyncd: /nonexistent/gsyncd
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard: config
Depends On:
Blocks:
 
Reported: 2014-07-21 17:08 UTC by Jacob Shucart
Modified: 2018-04-16 15:57 UTC
CC: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-16 15:57:50 UTC


Attachments

Description Jacob Shucart 2014-07-21 17:08:50 UTC
Description of problem in RHS 3.0 Early Access release:

Running the config command on an active geo-replication session shows:

remote_gsyncd: /nonexistent/gsyncd

How reproducible:

Easy

Steps to Reproduce:
1. Start geo-replication
2. Run the config command for the session (gluster volume geo-replication <MASTER> <SLAVE> config)
3. Look at the output
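
The steps above can be sketched as a shell session; the volume and host names (mastervol, slavehost, slavevol) are placeholders, and the commands assume a working GlusterFS cluster with a geo-replication session already created:

```shell
# 1. Start geo-replication for the session
gluster volume geo-replication mastervol slavehost::slavevol start

# 2. Dump the session's configuration
gluster volume geo-replication mastervol slavehost::slavevol config

# 3. In the affected release, the output includes the line:
#      remote_gsyncd: /nonexistent/gsyncd
```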

Actual results:

It shows:
remote_gsyncd: /nonexistent/gsyncd


Expected results:

It should show the actual path to the gsyncd binary.

Additional info:

Comment 2 Jacob Shucart 2014-08-06 16:16:37 UTC
I see where /nonexistent/gsyncd is set.  It is basically part of a template file, right?  Can we just change the template file and replace /nonexistent/gsyncd with the real path, so that this never comes up?  The problem here is that a customer is running into this due to issues with running the create command for geo-replication.  They have an unusual security software layer that puts some things in non-standard locations.

Comment 3 Aravinda VK 2015-12-28 06:15:18 UTC
Fix to be done in $SRC/geo-replication/syncdaemon/configinterface.py.in

Handle this config key as a special case.
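
A minimal sketch of what such a special case could look like, assuming the placeholder value and the install path /usr/libexec/glusterfs/gsyncd; the function name and signature are hypothetical, not the actual configinterface.py.in code:

```python
# Placeholder that the config template ships with
PLACEHOLDER = "/nonexistent/gsyncd"


def resolve_remote_gsyncd(value, real_path="/usr/libexec/glusterfs/gsyncd"):
    """Return the configured remote_gsyncd value, substituting the
    template placeholder with the real gsyncd path (assumed location).

    Explicitly configured non-placeholder values are passed through
    unchanged, so non-standard installs keep working.
    """
    if value == PLACEHOLDER:
        return real_path
    return value


# Placeholder is rewritten; a customer-set path is left alone
print(resolve_remote_gsyncd("/nonexistent/gsyncd"))
print(resolve_remote_gsyncd("/opt/secure/gsyncd"))
```

Passing explicit values through unchanged matters for the customer case in comment 2, where security software relocates binaries to non-standard paths.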

Comment 7 Aravinda VK 2018-02-06 08:08:47 UTC
Geo-replication support has been added to the Glusterd2 project, which will be available with the Gluster upstream 4.0 and 4.1 releases.

Most of the issues are already fixed under https://github.com/gluster/glusterd2/issues/271, and the remaining fixes are noted in https://github.com/gluster/glusterd2/issues/557

We can close these issues since we are not planning any fixes for the 3.x series.

