Description of problem:
Geo-replication throws the wrong error message when commands are executed in the wrong order (for example, "start" before "create").

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
# gluster v geo master 10.70.43.111::slave start
state-file entry missing in the config file(/var/lib/glusterd/geo-replication/master_10.70.43.111_slave/gsyncd.conf).
geo-replication command failed
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Version-Release number of selected component (if applicable):
glusterfs-3.6.0.22-1.el6rhs

How reproducible:
Happens every time.

Steps to Reproduce:
1. Create the master and slave volumes.
2. Run geo-rep start without first running geo-rep create (see the sketch after this report).
3. Check the error message printed by the CLI.

Actual results:
The CLI prints an internal error about a missing state-file entry in gsyncd.conf instead of a meaningful message.

Expected results:
The command should give the right error message, i.e. report that the geo-replication session does not exist.

Additional info:
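For reference, a minimal reproduction sketch. The hostnames, brick paths, and volume names below are placeholders and not taken from a specific setup; the only requirement is that "geo-replication ... start" is issued without a prior "geo-replication ... create".

    # On the master cluster: create and start the master volume
    # (node1/node2 and the brick paths are placeholders)
    gluster volume create master node1:/rhs/brick1/b1 node2:/rhs/brick1/b1
    gluster volume start master

    # On the slave cluster: create and start the slave volume
    # (slavenode is a placeholder)
    gluster volume create slave slavenode:/rhs/brick1/b1
    gluster volume start slave

    # Back on the master: skip "geo-replication ... create" and start directly.
    # Affected builds print the internal "state-file entry missing in the
    # config file" error instead of reporting that the session does not exist.
    gluster volume geo-replication master slavenode::slave start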
Verified with build: glusterfs-3.7.0-2.el6rhs.x86_64

If you try to start geo-rep without first creating the session, it now gives the desired output:

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave start
Geo-replication session between master and 10.70.46.154::slave does not exist.
geo-replication command failed

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave config
Geo-replication session between master and 10.70.46.154::slave does not exist.
geo-replication command failed

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave create
Creating geo-replication session between master & 10.70.46.154::slave has been successful

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave status

MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Created    N/A             N/A
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Created    N/A             N/A
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Created    N/A             N/A
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Created    N/A             N/A
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Created    N/A             N/A
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Created    N/A             N/A

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave config
special_sync_mode: partial
state_socket_unencoded: /var/lib/glusterd/geo-replication/master_10.70.46.154_slave/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave.socket
gluster_log_file: /var/log/glusterfs/geo-replication/master/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave.gluster.log
ssh_command: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
ignore_deletes: false
change_detector: changelog
gluster_command_dir: /usr/sbin/
georep_session_working_dir: /var/lib/glusterd/geo-replication/master_10.70.46.154_slave/
state_file: /var/lib/glusterd/geo-replication/master_10.70.46.154_slave/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave.status
remote_gsyncd: /nonexistent/gsyncd
changelog_log_file: /var/log/glusterfs/geo-replication/master/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave-changes.log
socketdir: /var/run/gluster
working_dir: /var/lib/misc/glusterfsd/master/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave
state_detail_file: /var/lib/glusterd/geo-replication/master_10.70.46.154_slave/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave-detail.status
ssh_command_tar: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem
pid_file: /var/lib/glusterd/geo-replication/master_10.70.46.154_slave/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave.pid
log_file: /var/log/glusterfs/geo-replication/master/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave.log
gluster_params: aux-gfid-mount acl

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave start
Starting geo-replication session between master & 10.70.46.154::slave has been successful

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave status

MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS     LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.101    Active     History Crawl    N/A
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.101    Active     History Crawl    2015-05-20 12:22:17
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.154    Passive    N/A              N/A
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.154    Passive    N/A              N/A
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.103    Passive    N/A              N/A
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.103    Passive    N/A              N/A

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave status

MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.101    Active     Changelog Crawl    2015-05-20 12:22:17
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.101    Active     Changelog Crawl    2015-05-20 12:22:17
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.154    Passive    N/A                N/A
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.154    Passive    N/A                N/A
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.103    Passive    N/A                N/A
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.103    Passive    N/A                N/A

Moving the bug to Verified state.
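For reference, the working order of operations from the verification above, condensed (the commands are exactly those in the transcript; only their output is omitted):

    gluster volume geo-replication master 10.70.46.154::slave create    # must come first
    gluster volume geo-replication master 10.70.46.154::slave start
    gluster volume geo-replication master 10.70.46.154::slave status
    gluster volume geo-replication master 10.70.46.154::slave config    # optional: inspect session settings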
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html