Bug 1032172 - Erroneous report of success when starting session
Summary: Erroneous report of success when starting session
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: 3.4.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: low
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-11-19 16:25 UTC by David Peacock
Modified: 2015-10-07 13:16 UTC
CC List: 3 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2015-10-07 13:16:19 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description David Peacock 2013-11-19 16:25:44 UTC
Description of problem:

When starting a session that is known to be faulty, the start command still reports success.

Version-Release number of selected component (if applicable):

3.4.1

How reproducible:

Always

Steps to Reproduce:
1. Deliberately misconfigure ssh to the slave (see the sketch after this list)
2. `gluster volume geo-replication gv0 peacock.1.136:/glusterslave config`
3. `gluster volume geo-replication gv0 peacock.1.136:/glusterslave start`
4. Observe a success response along the lines of `Starting geo-replication session between gv0 & peacock.1.136:/glusterslave has been successful`
5. Check the log to see the SSH errors and the faulty status
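
For step 1, the exact misconfiguration used is not recorded here; the EACCES on "ssh" in the log below suggests gsyncd could not even execute the ssh client. A minimal sketch of one way to simulate a broken slave SSH setup on a disposable test box (the secret.pem path and the chmod trick are assumptions, not taken from this report):

# One way (assumption): make the geo-replication SSH identity unusable so the
# worker cannot reach the slave. Confirm the secret.pem path on your install.
mv /var/lib/glusterd/geo-replication/secret.pem /var/lib/glusterd/geo-replication/secret.pem.bak

# Another way (assumption), closer to the EACCES seen in the log: on a
# throwaway test VM only, drop the execute bit from the ssh client so
# gsyncd's exec of "ssh" fails with Permission denied.
# chmod a-x /usr/bin/ssh

# Then start the session (step 3) and note that the CLI still reports success.
gluster volume geo-replication gv0 peacock.1.136:/glusterslave start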

Actual results:

[root@localhost geo-replication]# cat  /var/log/glusterfs/geo-replication/gv0/ssh%3A%2F%2Fpeacock%40192.168.1.136%3Afile%3A%2F%2F%2Fglusterslave.log
[2013-11-19 10:05:07.310777] I [monitor(monitor):21:set_state] Monitor: new state: starting...
[2013-11-19 10:05:07.330323] I [monitor(monitor):80:monitor] Monitor: ------------------------------------------------------------
[2013-11-19 10:05:07.330555] I [monitor(monitor):81:monitor] Monitor: starting gsyncd worker
[2013-11-19 10:05:07.456104] I [gsyncd:404:main_i] <top>: syncing: gluster://localhost:gv0 -> ssh://peacock.1.136:/glusterslave
[2013-11-19 10:05:07.503731] E [syncdutils:174:log_raise_exception] <top>: execution of "ssh" failed with EACCES (Permission denied)
[2013-11-19 10:05:07.503969] I [syncdutils:148:finalize] <top>: exiting.
[2013-11-19 10:05:08.506489] I [monitor(monitor):21:set_state] Monitor: new state: faulty
[2013-11-19 10:05:18.554353] I [monitor(monitor):80:monitor] Monitor: ------------------------------------------------------------
[2013-11-19 10:05:18.557637] I [monitor(monitor):81:monitor] Monitor: starting gsyncd worker
[2013-11-19 10:05:18.727061] I [gsyncd:404:main_i] <top>: syncing: gluster://localhost:gv0 -> ssh://peacock.1.136:/glusterslave
[2013-11-19 10:05:18.806447] E [syncdutils:174:log_raise_exception] <top>: execution of "ssh" failed with EACCES (Permission denied)
[2013-11-19 10:05:18.810677] I [syncdutils:148:finalize] <top>: exiting.
[2013-11-19 10:05:29.824590] I [monitor(monitor):80:monitor] Monitor: ------------------------------------------------------------
[2013-11-19 10:05:29.824863] I [monitor(monitor):81:monitor] Monitor: starting gsyncd worker
[2013-11-19 10:05:30.28832] I [gsyncd:404:main_i] <top>: syncing: gluster://localhost:gv0 -> ssh://peacock.1.136:/glusterslave
[2013-11-19 10:05:30.164731] E [syncdutils:174:log_raise_exception] <top>: execution of "ssh" failed with EACCES (Permission denied)
[2013-11-19 10:05:30.165200] I [syncdutils:148:finalize] <top>: exiting.

[root@localhost geo-replication]# gluster volume geo-replication gv0 peacock.1.136:/glusterslave status
NODE                 MASTER               SLAVE                                              STATUS    
---------------------------------------------------------------------------------------------------
localhost.localdomain gv0                  peacock.1.136:/glusterslave                faulty

Expected results:

The start command should report the failure (or at least warn that the session has gone faulty), preferably with a pointer to the cause or to the relevant log file.
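
Until the CLI does this itself, a wrapper along the following lines can surface the faulty state right after starting; this is only a sketch, and the status-column parsing is an assumption about the 3.4 CLI output format:

#!/bin/sh
# Hypothetical wrapper: start the session, give the monitor a moment to
# transition, then fail loudly if the status shows "faulty".
MASTER=gv0
SLAVE=peacock.1.136:/glusterslave

gluster volume geo-replication "$MASTER" "$SLAVE" start
sleep 10

# Assumption: the last column of the last status line is the state.
STATE=$(gluster volume geo-replication "$MASTER" "$SLAVE" status | awk 'END {print $NF}')
if [ "$STATE" = "faulty" ]; then
    echo "geo-replication $MASTER -> $SLAVE is faulty; check the logs under /var/log/glusterfs/geo-replication/$MASTER/" >&2
    exit 1
fi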

Additional info:

Comment 1 Niels de Vos 2015-05-17 21:58:32 UTC
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained; at the moment these are 3.6 and 3.5.

This bug has been filed against the 3.4 release and will not get fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. In case updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs".

If there is no response by the end of the month, this bug will get automatically closed.

Comment 2 Kaleb KEITHLEY 2015-10-07 13:16:19 UTC
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release, please reopen it and change the version, or open a new bug.

