Bug 1113424
| Summary: | Dist-geo-rep: geo-rep throws wrong error messages when incorrect commands are executed. | |||
|---|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Vijaykumar Koppad <vkoppad> | |
| Component: | geo-replication | Assignee: | Avra Sengupta <asengupt> | |
| Status: | CLOSED ERRATA | QA Contact: | storage-qa-internal <storage-qa-internal> | |
| Severity: | medium | Docs Contact: | ||
| Priority: | medium | |||
| Version: | rhgs-3.0 | CC: | aavati, avishwan, csaba, david.macdonald, nlevinki, rhinduja | |
| Target Milestone: | --- | |||
| Target Release: | RHGS 3.1.0 | |||
| Hardware: | x86_64 | |||
| OS: | Linux | |||
| Whiteboard: | usability | |||
| Fixed In Version: | glusterfs-3.7.0-2.el6rhs | Doc Type: | Bug Fix | |
| Doc Text: | Story Points: | --- | ||
| Clone Of: | ||||
| : | 1114469 | Environment: | ||
| Last Closed: | 2015-07-29 04:32:50 UTC | Type: | Bug | |
| Regression: | --- | Mount Type: | --- | |
| Documentation: | --- | CRM: | ||
| Verified Versions: | Category: | --- | ||
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
| Cloudforms Team: | --- | Target Upstream Version: | ||
| Embargoed: | ||||
| Bug Depends On: | ||||
| Bug Blocks: | 1114469, 1202842, 1223636 | |||
Description
Vijaykumar Koppad
2014-06-26 07:41:22 UTC
Verified with build: glusterfs-3.7.0-2.el6rhs.x86_64

If you try to start or configure geo-rep before the session has been created, it now gives the desired output:

```
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave start
Geo-replication session between master and 10.70.46.154::slave does not exist.
geo-replication command failed

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave config
Geo-replication session between master and 10.70.46.154::slave does not exist.
geo-replication command failed
```

Once the session is created, the same commands succeed:

```
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave create
Creating geo-replication session between master & 10.70.46.154::slave has been successful

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave status

MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Created    N/A             N/A
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Created    N/A             N/A
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Created    N/A             N/A
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Created    N/A             N/A
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Created    N/A             N/A
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Created    N/A             N/A

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave config
special_sync_mode: partial
state_socket_unencoded: /var/lib/glusterd/geo-replication/master_10.70.46.154_slave/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave.socket
gluster_log_file: /var/log/glusterfs/geo-replication/master/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave.gluster.log
ssh_command: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
ignore_deletes: false
change_detector: changelog
gluster_command_dir: /usr/sbin/
georep_session_working_dir: /var/lib/glusterd/geo-replication/master_10.70.46.154_slave/
state_file: /var/lib/glusterd/geo-replication/master_10.70.46.154_slave/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave.status
remote_gsyncd: /nonexistent/gsyncd
changelog_log_file: /var/log/glusterfs/geo-replication/master/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave-changes.log
socketdir: /var/run/gluster
working_dir: /var/lib/misc/glusterfsd/master/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave
state_detail_file: /var/lib/glusterd/geo-replication/master_10.70.46.154_slave/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave-detail.status
ssh_command_tar: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem
pid_file: /var/lib/glusterd/geo-replication/master_10.70.46.154_slave/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave.pid
log_file: /var/log/glusterfs/geo-replication/master/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave.log
gluster_params: aux-gfid-mount acl

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave start
Starting geo-replication session between master & 10.70.46.154::slave has been successful

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave status

MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS     LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.101    Active     History Crawl    N/A
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.101    Active     History Crawl    2015-05-20 12:22:17
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.154    Passive    N/A              N/A
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.154    Passive    N/A              N/A
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.103    Passive    N/A              N/A
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.103    Passive    N/A              N/A

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave status

MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.101    Active     Changelog Crawl    2015-05-20 12:22:17
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.101    Active     Changelog Crawl    2015-05-20 12:22:17
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.154    Passive    N/A                N/A
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.154    Passive    N/A                N/A
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.103    Passive    N/A                N/A
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.103    Passive    N/A                N/A
```

Moving the bug to verified state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html
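For quick reference when reproducing this, the errors above are the expected response whenever `start`, `config`, or `status` is run before the session exists. A minimal sketch of the usual command ordering that avoids them is shown below, reusing the master volume and slave host from the log; the `gsec_create` step and the `push-pem` option are the standard key-distribution path for root geo-rep sessions but do not appear in the log above, so treat this as illustrative rather than as the verification procedure itself:

```sh
# Run once on one master node: generate the common secret pem used by all
# master nodes (assumed standard setup step; not shown in the log above).
gluster system:: execute gsec_create

# Create the session first. Running start/config/status before this step
# is exactly what produces the "does not exist" error shown above.
gluster volume geo-replication master 10.70.46.154::slave create push-pem

# Only after a successful create:
gluster volume geo-replication master 10.70.46.154::slave start
gluster volume geo-replication master 10.70.46.154::slave status
```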