Bug 1480907 - Geo-rep help looks to have a typo.
Status: NEW
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Version: 3.2
Hardware: x86_64 OS: All
Priority: unspecified Severity: low
Assigned To: Aravinda VK
QA Contact: Rahul Hinduja
Reported: 2017-08-12 16:15 EDT by Ben Turner
Modified: 2018-01-18 01:19 EST
CC: 4 users

Type: Bug

Attachments: None
Description Ben Turner 2017-08-12 16:15:57 EDT
Description of problem:

When I run:

[root@dell-per730-01-priv ~]# gluster v geo-replication help
Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {create [[ssh-port n] [[no-verify]|[push-pem]]] [force]|start [force]|stop [force]|pause [force]|resume [force]|config|status [detail]|delete [reset-sync-time]} [options...]

In that usage string I see:

[force]|config|status

and:

[detail]|delete

I think the modifiers should instead associate as:

config (no modifier)
status [detail]
delete [force]
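
Putting that together, I'd expect the full usage line to read something like this (my suggested wording, not output from any actual build):

Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {create [[ssh-port n] [[no-verify]|[push-pem]]] [force]|start [force]|stop [force]|pause [force]|resume [force]|config|status [detail]|delete [force] [reset-sync-time]} [options...]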

For example when I run:

[root@dell-per730-01-priv ~]# gluster v geo-replication data root@gqas015.sbu.lab.eng.bos.redhat.com::georep-vol config
special_sync_mode: partial
gluster_log_file: /var/log/glusterfs/geo-replication/data/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol.gluster.log
ssh_command: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
change_detector: changelog
use_meta_volume: true
session_owner: 71be0011-6af3-4250-8028-65eb6563d820
state_file: /var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/monitor.status
gluster_params: aux-gfid-mount acl
remote_gsyncd: /nonexistent/gsyncd
working_dir: /var/lib/misc/glusterfsd/data/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol
state_detail_file: /var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol-detail.status
gluster_command_dir: /usr/sbin/
pid_file: /var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/monitor.pid
georep_session_working_dir: /var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/
ssh_command_tar: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem
master.stime_xattr_name: trusted.glusterfs.71be0011-6af3-4250-8028-65eb6563d820.91505f86-9440-47e1-a2d0-8fb817778f71.stime
changelog_log_file: /var/log/glusterfs/geo-replication/data/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol-changes.log
socketdir: /var/run/gluster
volume_id: 71be0011-6af3-4250-8028-65eb6563d820
ignore_deletes: false
state_socket_unencoded: /var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol.socket
log_file: /var/log/glusterfs/geo-replication/data/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol.log

That config query succeeds. But when I add detail I get:

[root@dell-per730-01-priv ~]# gluster v geo-replication data root@gqas015.sbu.lab.eng.bos.redhat.com::georep-vol config detail
Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {create [[ssh-port n] [[no-verify]|[push-pem]]] [force]|start [force]|stop [force]|pause [force]|resume [force]|config|status [detail]|delete [reset-sync-time]} [options...]

Also when I run:

[root@dell-per730-01-priv ~]# gluster v geo-replication data root@gqas015.sbu.lab.eng.bos.redhat.com::georep-vol status
 
MASTER NODE     MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                                             SLAVE NODE                            STATUS     CRAWL STATUS    LAST_SYNCED          
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
192.168.50.1    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas015.sbu.lab.eng.bos.redhat.com    Active     Hybrid Crawl    N/A                  
192.168.50.6    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas014.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A                  
192.168.50.2    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas011.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A                  
192.168.50.3    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas014.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A                  
192.168.50.5    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas011.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A                  
192.168.50.4    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas015.sbu.lab.eng.bos.redhat.com    Active     Hybrid Crawl    N/A                  

Likewise, status detail works:

[root@dell-per730-01-priv ~]# gluster v geo-replication data root@gqas015.sbu.lab.eng.bos.redhat.com::georep-vol status detail
 
MASTER NODE     MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                                             SLAVE NODE                            STATUS     CRAWL STATUS    LAST_SYNCED    ENTRY    DATA    META    FAILURES    CHECKPOINT TIME    CHECKPOINT COMPLETED    CHECKPOINT COMPLETION TIME   
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
192.168.50.1    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas015.sbu.lab.eng.bos.redhat.com    Active     Hybrid Crawl    N/A            0        431     0       0           N/A                N/A                     N/A                          
192.168.50.5    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas011.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A                          
192.168.50.6    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas014.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A                          
192.168.50.4    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas015.sbu.lab.eng.bos.redhat.com    Active     Hybrid Crawl    N/A            0        792     0       0           N/A                N/A                     N/A                          
192.168.50.2    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas011.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A                          
192.168.50.3    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas014.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A   

And when I run:

[root@dell-per730-01-priv ~]# gluster v geo-replication data root@gqas015.sbu.lab.eng.bos.redhat.com::georep-vol delete detail
Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {create [[ssh-port n] [[no-verify]|[push-pem]]] [force]|start [force]|stop [force]|pause [force]|resume [force]|config|status [detail]|delete [reset-sync-time]} [options...]

It fails. I didn't want to delete my session, so I didn't run:

[root@dell-per730-01-priv ~]# gluster v geo-replication data root@gqas015.sbu.lab.eng.bos.redhat.com::georep-vol delete force

But I believe force is the modifier that applies to delete here, not detail.
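
Short of running delete force (which would actually remove the session), one way to double-check which modifier each subcommand accepts is to try the safe combinations and look at the CLI's exit status. A rough sketch, assuming the CLI exits non-zero when it falls through to the usage message:

SESSION="data root@gqas015.sbu.lab.eng.bos.redhat.com::georep-vol"
for combo in "config detail" "status detail" "delete detail"; do
    # word splitting of $SESSION and $combo into separate arguments is intentional
    gluster v geo-replication $SESSION $combo > /dev/null 2>&1
    echo "$combo -> exit $?"
done

On this build only status detail succeeds; the other two just print the usage line above.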

Version-Release number of selected component (if applicable):

[root@dell-per730-01-priv ~]# rpm -q glusterfs
glusterfs-3.8.4-18.4.el7rhgs.x86_64

How reproducible:

Every time.

Steps to Reproduce:
1.  Run gluster v geo-rep help
2.  Check whether the force / detail modifiers apply to the config / status / delete subcommands (condensed below)
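
Condensed, with <MASTER_VOL> and <SLAVE_URL> as placeholders for an established geo-rep session:

gluster v geo-replication help
gluster v geo-replication <MASTER_VOL> <SLAVE_URL> config detail    # falls through to usage
gluster v geo-replication <MASTER_VOL> <SLAVE_URL> status detail    # prints the detail table
gluster v geo-replication <MASTER_VOL> <SLAVE_URL> delete detail    # falls through to usage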

Actual results:

The help output associates the modifiers as shown above, which I believe is a typo.

Expected results:

Usage output that associates each modifier with the correct subcommand.

Additional info:
