Description of problem:
Currently, if we manually check the geo-rep status or stop a session with an invalid slave host or slave volume, it throws the right warning:
[root@dhcp42-79 MASTER]# gluster volume geo-replication MASTER 10.70.41.209::SLAV status
No active geo-replication sessions between MASTER and 10.70.41.209::SLAV
[root@dhcp42-79 MASTER]# gluster volume geo-replication MASTER 10.70.41.209::SLAV stop
Geo-replication session between MASTER and 10.70.41.209::SLAV does not exist.
geo-replication command failed
But if the schedule_georep script is passed invalid slave host and volume information, it fails with "Commit failed on localhost":
[root@dhcp42-79 MASTER]# time python /usr/share/glusterfs/scripts/schedule_georep.py MASTER 10.70.41.29 SLAVE
Commit failed on localhost. Please check the log file for more details.
The problem with the above output is that it gives no indication of whether something is down on the slave (gsyncd, slave volume) or whether wrong slave information was provided. It also does not say which logs the user should look into.
If geo-replication stop/status fails, the script should print messages similar to those shown when the commands are executed manually.
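A minimal sketch of the kind of check the script could perform before scheduling: inspect the output of the manual status command for the "no session" messages quoted above and fail early with a clear error. The function name and marker strings are illustrative, not the actual patch.

```python
# Hypothetical sketch: decide from gluster CLI status output whether a
# geo-replication session exists, so schedule_georep can report a clear
# error instead of "Commit failed on localhost". Marker strings are taken
# from the manual command output quoted in this report.

def session_exists(status_output):
    """Return False when gluster reports no session for the master/slave pair."""
    markers = (
        "No active geo-replication sessions",
        "does not exist",
    )
    return not any(marker in status_output for marker in markers)

# Messages copied from the manual commands above:
print(session_exists(
    "No active geo-replication sessions between MASTER and 10.70.41.209::SLAV"))
# → False
print(session_exists("MASTER  10.70.41.209::SLAVE  Active"))
# → True
```

With a check like this, the script could print the same "Geo-replication session ... does not exist" message that the manual stop/status commands print.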
REVIEW: https://review.gluster.org/18442 (geo-rep/scheduler: Add validation for session existence) posted (#1) for review on master by Kotresh HR (email@example.com)
COMMIT: https://review.gluster.org/18442 committed in master by Kotresh HR (firstname.lastname@example.org)
Author: Kotresh HR <email@example.com>
Date: Fri Oct 6 05:33:31 2017 -0400
geo-rep/scheduler: Add validation for session existence
Added validation to check for session existence
and give out a proper error message.
Signed-off-by: Kotresh HR <firstname.lastname@example.org>
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.13.0, please open a new bug report.
glusterfs-3.13.0 has been announced on the Gluster mailing lists; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list and the update infrastructure for your distribution.