Bug 1480907 - Geo-rep help looks to have a typo.
Summary: Geo-rep help looks to have a typo.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: rhgs-3.2
Hardware: x86_64
OS: All
Priority: low
Severity: low
Target Milestone: ---
Target Release: RHGS 3.5.0
Assignee: Shwetha K Acharya
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Depends On:
Blocks: 1652887 1696807
 
Reported: 2017-08-12 20:15 UTC by Ben Turner
Modified: 2019-10-30 12:20 UTC
CC: 9 users

Fixed In Version: glusterfs-6.0-4
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned to: 1652887
Environment:
Last Closed: 2019-10-30 12:19:37 UTC
Embargoed:




Links
Red Hat Product Errata RHEA-2019:3249 (last updated 2019-10-30 12:20:11 UTC)

Description Ben Turner 2017-08-12 20:15:57 UTC
Description of problem:

When I run:

[root@dell-per730-01-priv ~]# gluster v geo-replication help
Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {create [[ssh-port n] [[no-verify]|[push-pem]]] [force]|start [force]|stop [force]|pause [force]|resume [force]|config|status [detail]|delete [reset-sync-time]} [options...]

I see:

[force]|config|status
And:

[detail]|delete

I think this should read:

|config
[detail]|status
[force]|delete

For example, when I run:

[root@dell-per730-01-priv ~]# gluster v geo-replication data root.lab.eng.bos.redhat.com::georep-vol config
special_sync_mode: partial
gluster_log_file: /var/log/glusterfs/geo-replication/data/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol.gluster.log
ssh_command: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
change_detector: changelog
use_meta_volume: true
session_owner: 71be0011-6af3-4250-8028-65eb6563d820
state_file: /var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/monitor.status
gluster_params: aux-gfid-mount acl
remote_gsyncd: /nonexistent/gsyncd
working_dir: /var/lib/misc/glusterfsd/data/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol
state_detail_file: /var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol-detail.status
gluster_command_dir: /usr/sbin/
pid_file: /var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/monitor.pid
georep_session_working_dir: /var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/
ssh_command_tar: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem
master.stime_xattr_name: trusted.glusterfs.71be0011-6af3-4250-8028-65eb6563d820.91505f86-9440-47e1-a2d0-8fb817778f71.stime
changelog_log_file: /var/log/glusterfs/geo-replication/data/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol-changes.log
socketdir: /var/run/gluster
volume_id: 71be0011-6af3-4250-8028-65eb6563d820
ignore_deletes: false
state_socket_unencoded: /var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol.socket
log_file: /var/log/glusterfs/geo-replication/data/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol.log

It is successful.  But when I try 'detail' I get:

[root@dell-per730-01-priv ~]# gluster v geo-replication data root.lab.eng.bos.redhat.com::georep-vol config detail
Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {create [[ssh-port n] [[no-verify]|[push-pem]]] [force]|start [force]|stop [force]|pause [force]|resume [force]|config|status [detail]|delete [reset-sync-time]} [options...]

Also when I run:

[root@dell-per730-01-priv ~]# gluster v geo-replication data root.lab.eng.bos.redhat.com::georep-vol status
 
MASTER NODE     MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                                             SLAVE NODE                            STATUS     CRAWL STATUS    LAST_SYNCED          
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
192.168.50.1    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas015.sbu.lab.eng.bos.redhat.com    Active     Hybrid Crawl    N/A                  
192.168.50.6    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas014.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A                  
192.168.50.2    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas011.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A                  
192.168.50.3    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas014.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A                  
192.168.50.5    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas011.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A                  
192.168.50.4    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas015.sbu.lab.eng.bos.redhat.com    Active     Hybrid Crawl    N/A                  

As well as:

[root@dell-per730-01-priv ~]# gluster v geo-replication data 
root.lab.eng.bos.redhat.com::georep-vol status detail
 
MASTER NODE     MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                                             SLAVE NODE                            STATUS     CRAWL STATUS    LAST_SYNCED    ENTRY    DATA    META    FAILURES    CHECKPOINT TIME    CHECKPOINT COMPLETED    CHECKPOINT COMPLETION TIME   
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
192.168.50.1    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas015.sbu.lab.eng.bos.redhat.com    Active     Hybrid Crawl    N/A            0        431     0       0           N/A                N/A                     N/A                          
192.168.50.5    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas011.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A                          
192.168.50.6    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas014.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A                          
192.168.50.4    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas015.sbu.lab.eng.bos.redhat.com    Active     Hybrid Crawl    N/A            0        792     0       0           N/A                N/A                     N/A                          
192.168.50.2    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas011.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A                          
192.168.50.3    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas014.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A   

And when I run:

[root@dell-per730-01-priv ~]# gluster v geo-replication data root.lab.eng.bos.redhat.com::georep-vol delete detail
Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {create [[ssh-port n] [[no-verify]|[push-pem]]] [force]|start [force]|stop [force]|pause [force]|resume [force]|config|status [detail]|delete [reset-sync-time]} [options...]

It fails.  I didn't want to delete my session so I didn't run:

[root@dell-per730-01-priv ~]# gluster v geo-replication data root.lab.eng.bos.redhat.com::georep-vol delete force

But I feel 'force' is applicable here, not 'detail'.

Version-Release number of selected component (if applicable):

[root@dell-per730-01-priv ~]# rpm -q glusterfs
glusterfs-3.8.4-18.4.el7rhgs.x86_64

How reproducible:

Every time.

Steps to Reproduce:
1.  Run gluster v geo-rep help
2.  Look at the config / status / delete options and whether force / detail apply (see the quick check below)
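
A quick check of the combinations in question (same volume and slave session as above): 'detail' is accepted after 'status' but not after 'config':

# gluster v geo-replication data root.lab.eng.bos.redhat.com::georep-vol status detail    <- succeeds
# gluster v geo-replication data root.lab.eng.bos.redhat.com::georep-vol config detail    <- prints the usage error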

Actual results:

I think there is a typo, as described above.

Expected results:

Proper usage.

Additional info:

Comment 5 Shwetha K Acharya 2018-11-19 14:08:23 UTC
With a little formatting, we can see that the geo-replication help outputs something like the following:

# gluster v geo-replication help
Usage:

volume geo-replication [<VOLNAME>] [<SLAVE-URL>] 
{
	create [[ssh-port n] [[no-verify]|[push-pem]]] [force]
        |start [force]
        |stop [force]
        |pause [force]
        |resume [force]
        |config
        |status [detail]
        |delete [reset-sync-time]
} [options...]
#

It looks like the format of the help text is being misinterpreted when read.

Comment 6 Shwetha K Acharya 2018-11-19 15:12:40 UTC
Am I missing something?

Comment 7 Kotresh HR 2018-11-22 05:47:26 UTC
No, you're right: 'detail' is associated only with 'status' and is listed correctly. But I think there is an issue with the 'create' and 'config' options. It should have been as below.


Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] 
     {create {[ssh-port n] no-verify|push-pem}[force]
      |start [force] 
      |stop [force]
      |pause [force]
      |resume [force]
      |config [key] [value]
      |status [detail]
      |delete [reset-sync-time]
}[options...]

So please validate the following.

1. The 'create' options as mentioned above: once 'create' is used, either 'no-verify' or 'push-pem' should follow.
2. 'config' takes an optional [key] and [value] (see the example invocations below).
3. Check what '[options...]' at the end refers to; I think it can be removed?
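
For reference, points 1 and 2 correspond to invocations along these lines (the volume name, slave URL, and the use_meta_volume key are only illustrative):

# gluster volume geo-replication master-vol slavehost.example.com::slave-vol create ssh-port 22 push-pem
# gluster volume geo-replication master-vol slavehost.example.com::slave-vol config
# gluster volume geo-replication master-vol slavehost.example.com::slave-vol config use_meta_volume true

The bare 'config' lists all options (as in the description above); with a key and value it sets that option.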

Comment 8 Shwetha K Acharya 2018-11-26 07:41:24 UTC
https://bugzilla.redhat.com/show_bug.cgi?id=1652887

Comment 9 Shwetha K Acharya 2018-11-26 08:55:52 UTC
Upstream patch link: https://review.gluster.org/21711 (geo-rep: Geo-rep help text issue)

Comment 11 Anees Patel 2019-05-08 07:16:00 UTC
Hi Shwetha,

The output of the help command on the latest build is given below:

# gluster v geo-replication help

Usage:
volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {\
 create [[ssh-port n] [[no-verify] | [push-pem]]] [force] \
 | start [force] \
 | stop [force] \
 | pause [force] \
 | resume [force] \
 | config [[[\!]<option>] [<value>]] \
 | status [detail] \
 | delete [reset-sync-time]} 


I think line 1 should read as

volume geo-replication [<master-volume>] [<slave-ip>]::[<slave-volume>] {\
followed by the rest of the output as-is.

Elsewhere in the RHGS docs we have referred to geo-rep sessions as: # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create push-pem [force]
The changes need to be in line with the existing documentation.

Also, in my opinion there is no need to print '\' at the end of each line, as this pattern is not seen elsewhere in the docs; let me know what you think about it.

Comment 12 Shwetha K Acharya 2019-05-08 07:54:19 UTC
Hi Anees,

I agree that [<slave-ip>]::[<slave-volume>] is good to have.
'\' is a convention indicating that whatever is written after it is part of the previous line, so I think there is no harm in having it. (This help text was written line by line to avoid confusion or misinterpretation.)
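
For example, the shell itself uses the same convention: a trailing '\' continues a command onto the next line (volume and slave URL illustrative):

# gluster volume geo-replication master-vol slavehost.example.com::slave-vol \
      config use_meta_volume true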

Comment 14 Shwetha K Acharya 2019-05-09 07:27:13 UTC
Upstream patch link: https://review.gluster.org/#/c/glusterfs/+/22689/

Comment 23 errata-xmlrpc 2019-10-30 12:19:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:3249

