Bug 1032172 - Erroneous report of success when starting session
Status: CLOSED EOL
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: 3.4.1
Hardware: x86_64 Linux
Priority: medium
Severity: low
Assigned To: bugs@gluster.org
Reported: 2013-11-19 11:25 EST by David Peacock
Modified: 2015-10-07 09:16 EDT

Doc Type: Bug Fix
Last Closed: 2015-10-07 09:16:19 EDT
Type: Bug


Attachments: None
Description David Peacock 2013-11-19 11:25:44 EST
Description of problem:

When starting a session that is known to be faulty, the start command returns success.

Version-Release number of selected component (if applicable):

3.4.1

How reproducible:

Always

Steps to Reproduce:
1. Deliberately misconfigure ssh to the slave (one way is sketched after this list)
2. `gluster volume geo-replication gv0 peacock@192.168.1.136:/glusterslave config`
3. `gluster volume geo-replication gv0 peacock@192.168.1.136:/glusterslave start`
4. Observe a success response such as `Starting geo-replication session between gv0 & peacock@192.168.1.136:/glusterslave has been successful`
5. Check the log to see the SSH errors and the faulty status
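
A minimal reproduction sketch (the secret.pem path below is an assumption based on common 3.4 defaults; any ssh misconfiguration toward the slave will do):

# Break ssh auth from master to slave by making the geo-rep key unreadable
# (the key path is an assumption; adjust for your installation)
chmod 000 /var/lib/glusterd/geo-replication/secret.pem
gluster volume geo-replication gv0 peacock@192.168.1.136:/glusterslave start
# prints "... has been successful" even though the workers cannot connect
gluster volume geo-replication gv0 peacock@192.168.1.136:/glusterslave status
# the STATUS column reports "faulty"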

Actual results:

[root@localhost geo-replication]# cat  /var/log/glusterfs/geo-replication/gv0/ssh%3A%2F%2Fpeacock%40192.168.1.136%3Afile%3A%2F%2F%2Fglusterslave.log
[2013-11-19 10:05:07.310777] I [monitor(monitor):21:set_state] Monitor: new state: starting...
[2013-11-19 10:05:07.330323] I [monitor(monitor):80:monitor] Monitor: ------------------------------------------------------------
[2013-11-19 10:05:07.330555] I [monitor(monitor):81:monitor] Monitor: starting gsyncd worker
[2013-11-19 10:05:07.456104] I [gsyncd:404:main_i] <top>: syncing: gluster://localhost:gv0 -> ssh://peacock@192.168.1.136:/glusterslave
[2013-11-19 10:05:07.503731] E [syncdutils:174:log_raise_exception] <top>: execution of "ssh" failed with EACCES (Permission denied)
[2013-11-19 10:05:07.503969] I [syncdutils:148:finalize] <top>: exiting.
[2013-11-19 10:05:08.506489] I [monitor(monitor):21:set_state] Monitor: new state: faulty
[2013-11-19 10:05:18.554353] I [monitor(monitor):80:monitor] Monitor: ------------------------------------------------------------
[2013-11-19 10:05:18.557637] I [monitor(monitor):81:monitor] Monitor: starting gsyncd worker
[2013-11-19 10:05:18.727061] I [gsyncd:404:main_i] <top>: syncing: gluster://localhost:gv0 -> ssh://peacock@192.168.1.136:/glusterslave
[2013-11-19 10:05:18.806447] E [syncdutils:174:log_raise_exception] <top>: execution of "ssh" failed with EACCES (Permission denied)
[2013-11-19 10:05:18.810677] I [syncdutils:148:finalize] <top>: exiting.
[2013-11-19 10:05:29.824590] I [monitor(monitor):80:monitor] Monitor: ------------------------------------------------------------
[2013-11-19 10:05:29.824863] I [monitor(monitor):81:monitor] Monitor: starting gsyncd worker
[2013-11-19 10:05:30.28832] I [gsyncd:404:main_i] <top>: syncing: gluster://localhost:gv0 -> ssh://peacock@192.168.1.136:/glusterslave
[2013-11-19 10:05:30.164731] E [syncdutils:174:log_raise_exception] <top>: execution of "ssh" failed with EACCES (Permission denied)
[2013-11-19 10:05:30.165200] I [syncdutils:148:finalize] <top>: exiting.

[root@localhost geo-replication]# gluster volume geo-replication gv0 peacock@192.168.1.136:/glusterslave status
NODE                 MASTER               SLAVE                                              STATUS    
---------------------------------------------------------------------------------------------------
localhost.localdomain gv0                  peacock@192.168.1.136:/glusterslave                faulty
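
A quick way to confirm the failure from the shell (illustrative; the log path is the one from this report and varies per master/slave pair):

# Surface the worker errors and the faulty transition in this session's log
grep -E ' E \[|new state: faulty' \
    /var/log/glusterfs/geo-replication/gv0/ssh%3A%2F%2Fpeacock%40192.168.1.136%3Afile%3A%2F%2F%2Fglusterslave.log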

Expected results:

A report of failure during startup, preferably with a pointer to the cause or to the relevant log.
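
Until the CLI does this, a workaround sketch (illustrative only, not the project's fix; the 10-second settle time is an arbitrary assumption):

#!/bin/sh
# Start the session, then treat it as failed if it turns faulty shortly after.
VOL=gv0
SLAVE='peacock@192.168.1.136:/glusterslave'
gluster volume geo-replication "$VOL" "$SLAVE" start || exit 1
sleep 10   # arbitrary settle time (an assumption)
if gluster volume geo-replication "$VOL" "$SLAVE" status | grep -q faulty; then
    echo "session $VOL -> $SLAVE is faulty; check the geo-replication log" >&2
    exit 1
fi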

Additional info:
Comment 1 Niels de Vos 2015-05-17 17:58:32 EDT
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained; at the moment these are 3.6 and 3.5.

This bug has been filed against the 3.4 release and will not get fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. If updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs@gluster.org".

If there is no response by the end of the month, this bug will get automatically closed.
Comment 2 Kaleb KEITHLEY 2015-10-07 09:16:19 EDT
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release, please reopen it and change the version, or open a new bug.
