Bug 1113424 - Dist-geo-rep: geo-rep throws wrong error messages when incorrect commands are executed.
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Version: 3.0
Hardware: x86_64 Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.1.0
Assigned To: Avra Sengupta
QA Contact: storage-qa-internal@redhat.com
Keywords: usability
Depends On:
Blocks: 1114469 1202842 1223636
 
Reported: 2014-06-26 03:41 EDT by Vijaykumar Koppad
Modified: 2015-07-29 00:32 EDT
CC: 6 users

See Also:
Fixed In Version: glusterfs-3.7.0-2.el6rhs
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1114469
Environment:
Last Closed: 2015-07-29 00:32:50 EDT
Type: Bug
External Trackers:
Tracker ID: Red Hat Product Errata RHSA-2015:1495
Priority: normal
Status: SHIPPED_LIVE
Summary: Important: Red Hat Gluster Storage 3.1 update
Last Updated: 2015-07-29 04:26:26 EDT

Description Vijaykumar Koppad 2014-06-26 03:41:22 EDT
Description of problem: geo-rep throws wrong error messages when commands are executed in the wrong order; for example, running start before create complains about a missing state-file entry instead of a missing session.

# gluster v geo master 10.70.43.111::slave start
state-file entry missing in the config file(/var/lib/glusterd/geo-replication/master_10.70.43.111_slave/gsyncd.conf).
geo-replication command failed
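
For context, a geo-replication session must be created before it can be started. A minimal sketch of the intended order, reusing the volume names and slave host from this report and assuming passwordless SSH to the slave is already in place:

# gluster volume geo-replication master 10.70.43.111::slave create
# gluster volume geo-replication master 10.70.43.111::slave start
# gluster volume geo-replication master 10.70.43.111::slave status

The failure above comes from skipping the create step, yet the message blames a missing state-file entry rather than the missing session.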


Version-Release number of selected component (if applicable): glusterfs-3.6.0.22-1.el6rhs


How reproducible: Happens every time.


Steps to Reproduce:
1. Create master and slave volumes.
2. Run geo-rep start without first running geo-rep create (a condensed sketch follows the steps).
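
A condensed reproduction sketch; volume names, brick paths, and node names are taken from this report, and the slave volume is assumed to already exist and be started on 10.70.43.111:

(on the master cluster)
# gluster volume create master georep1:/rhs/brick1/b1 georep2:/rhs/brick1/b1
# gluster volume start master
(skip "create" and run start directly)
# gluster volume geo-replication master 10.70.43.111::slave start
(fails with the misleading state-file message shown in the description)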

Actual results: The start command fails with a misleading error about a missing state-file entry in gsyncd.conf.


Expected results: The command should report the actual problem, i.e. that the geo-replication session does not exist.


Additional info:
Comment 7 Rahul Hinduja 2015-05-20 03:00:35 EDT
Verified with build: glusterfs-3.7.0-2.el6rhs.x86_64

If you try to start geo-rep without first creating the session, it now gives the desired output:

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave start
Geo-replication session between master and 10.70.46.154::slave does not exist.
geo-replication command failed
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave config
Geo-replication session between master and 10.70.46.154::slave does not exist.
geo-replication command failed
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave create
Creating geo-replication session between master & 10.70.46.154::slave has been successful
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave status
 
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED          
---------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Created    N/A             N/A                  
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Created    N/A             N/A                  
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Created    N/A             N/A                  
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Created    N/A             N/A                  
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Created    N/A             N/A                  
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Created    N/A             N/A                  
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave config
special_sync_mode: partial
state_socket_unencoded: /var/lib/glusterd/geo-replication/master_10.70.46.154_slave/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave.socket
gluster_log_file: /var/log/glusterfs/geo-replication/master/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave.gluster.log
ssh_command: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
ignore_deletes: false
change_detector: changelog
gluster_command_dir: /usr/sbin/
georep_session_working_dir: /var/lib/glusterd/geo-replication/master_10.70.46.154_slave/
state_file: /var/lib/glusterd/geo-replication/master_10.70.46.154_slave/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave.status
remote_gsyncd: /nonexistent/gsyncd
changelog_log_file: /var/log/glusterfs/geo-replication/master/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave-changes.log
socketdir: /var/run/gluster
working_dir: /var/lib/misc/glusterfsd/master/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave
state_detail_file: /var/lib/glusterd/geo-replication/master_10.70.46.154_slave/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave-detail.status
ssh_command_tar: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem
pid_file: /var/lib/glusterd/geo-replication/master_10.70.46.154_slave/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave.pid
log_file: /var/log/glusterfs/geo-replication/master/ssh%3A%2F%2Froot%4010.70.46.154%3Agluster%3A%2F%2F127.0.0.1%3Aslave.log
gluster_params: aux-gfid-mount acl
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave start
Starting geo-replication session between master & 10.70.46.154::slave has been successful
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave status
 
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS     LAST_SYNCED                  
--------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.101    Active     History Crawl    N/A                          
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.101    Active     History Crawl    2015-05-20 12:22:17          
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.154    Passive    N/A              N/A                          
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.154    Passive    N/A              N/A                          
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.103    Passive    N/A              N/A                          
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.103    Passive    N/A              N/A                          
[root@georep1 scripts]# 
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave status
 
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED                  
----------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.101    Active     Changelog Crawl    2015-05-20 12:22:17          
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.101    Active     Changelog Crawl    2015-05-20 12:22:17          
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.154    Passive    N/A                N/A                          
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.154    Passive    N/A                N/A                          
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.103    Passive    N/A                N/A                          
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.103    Passive    N/A                N/A                          
[root@georep1 scripts]# 

Moving the bug to verified state
Comment 9 errata-xmlrpc 2015-07-29 00:32:50 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html
