Bug 1652887 - Geo-rep help looks to have a typo.
Summary: Geo-rep help looks to have a typo.
Status: CLOSED NEXTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: x86_64
OS: All
Priority: low
Severity: low
Target Milestone: ---
Assignee: Shwetha K Acharya
Depends On: 1480907
Blocks:
 
Reported: 2018-11-23 12:03 UTC by Shwetha K Acharya
Modified: 2019-05-23 15:26 UTC
CC List: 10 users

Fixed In Version: glusterfs-6.0
Clone Of: 1480907
Environment:
Last Closed: 2019-05-23 15:26:09 UTC
Regression: ---
Mount Type: ---
Documentation: ---




Links
System ID Private Priority Status Summary Last Updated
Gluster.org Gerrit 21711 0 None Merged geo-rep: Geo-rep help text issue 2018-11-27 14:02:44 UTC
Gluster.org Gerrit 22689 0 None Merged geo-rep: Geo-rep help text issue 2019-05-23 15:26:08 UTC

Comment 1 Worker Ant 2018-11-23 12:11:35 UTC
REVIEW: https://review.gluster.org/21711 (geo-rep: Geo-rep help text issue) posted (#1) for review on master by Shwetha K Acharya

Comment 2 Kotresh HR 2018-11-27 13:19:30 UTC
Description of problem:

When I run:

[root@dell-per730-01-priv ~]# gluster v geo-replication help
Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {create [[ssh-port n] [[no-verify]|[push-pem]]] [force]|start [force]|stop [force]|pause [force]|resume [force]|config|status [detail]|delete [reset-sync-time]} [options...]

I see:

[force]|config|status
And:

[detail]|delete

I think this should read:

|config
[detail]|status
[force]|delete

For example when I run:

[root@dell-per730-01-priv ~]# gluster v geo-replication data root.lab.eng.bos.redhat.com::georep-vol config
special_sync_mode: partial
gluster_log_file: /var/log/glusterfs/geo-replication/data/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol.gluster.log
ssh_command: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
change_detector: changelog
use_meta_volume: true
session_owner: 71be0011-6af3-4250-8028-65eb6563d820
state_file: /var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/monitor.status
gluster_params: aux-gfid-mount acl
remote_gsyncd: /nonexistent/gsyncd
working_dir: /var/lib/misc/glusterfsd/data/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol
state_detail_file: /var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol-detail.status
gluster_command_dir: /usr/sbin/
pid_file: /var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/monitor.pid
georep_session_working_dir: /var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/
ssh_command_tar: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem
master.stime_xattr_name: trusted.glusterfs.71be0011-6af3-4250-8028-65eb6563d820.91505f86-9440-47e1-a2d0-8fb817778f71.stime
changelog_log_file: /var/log/glusterfs/geo-replication/data/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol-changes.log
socketdir: /var/run/gluster
volume_id: 71be0011-6af3-4250-8028-65eb6563d820
ignore_deletes: false
state_socket_unencoded: /var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol.socket
log_file: /var/log/glusterfs/geo-replication/data/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol.log

It succeeds.  But when I try 'detail' I get:

[root@dell-per730-01-priv ~]# gluster v geo-replication data root.lab.eng.bos.redhat.com::georep-vol config detail
Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {create [[ssh-port n] [[no-verify]|[push-pem]]] [force]|start [force]|stop [force]|pause [force]|resume [force]|config|status [detail]|delete [reset-sync-time]} [options...]

Also when I run:

[root@dell-per730-01-priv ~]# gluster v geo-replication data root.lab.eng.bos.redhat.com::georep-vol status
 
MASTER NODE     MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                                             SLAVE NODE                            STATUS     CRAWL STATUS    LAST_SYNCED          
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
192.168.50.1    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas015.sbu.lab.eng.bos.redhat.com    Active     Hybrid Crawl    N/A                  
192.168.50.6    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas014.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A                  
192.168.50.2    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas011.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A                  
192.168.50.3    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas014.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A                  
192.168.50.5    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas011.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A                  
192.168.50.4    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas015.sbu.lab.eng.bos.redhat.com    Active     Hybrid Crawl    N/A                  

As well as:

[root@dell-per730-01-priv ~]# gluster v geo-replication data 
root.lab.eng.bos.redhat.com::georep-vol status detail
 
MASTER NODE     MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                                             SLAVE NODE                            STATUS     CRAWL STATUS    LAST_SYNCED    ENTRY    DATA    META    FAILURES    CHECKPOINT TIME    CHECKPOINT COMPLETED    CHECKPOINT COMPLETION TIME   
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
192.168.50.1    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas015.sbu.lab.eng.bos.redhat.com    Active     Hybrid Crawl    N/A            0        431     0       0           N/A                N/A                     N/A                          
192.168.50.5    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas011.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A                          
192.168.50.6    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas014.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A                          
192.168.50.4    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas015.sbu.lab.eng.bos.redhat.com    Active     Hybrid Crawl    N/A            0        792     0       0           N/A                N/A                     N/A                          
192.168.50.2    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas011.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A                          
192.168.50.3    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas014.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A   

And when I run:

[root@dell-per730-01-priv ~]# gluster v geo-replication data root.lab.eng.bos.redhat.com::georep-vol delete detail
Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {create [[ssh-port n] [[no-verify]|[push-pem]]] [force]|start [force]|stop [force]|pause [force]|resume [force]|config|status [detail]|delete [reset-sync-time]} [options...]

It fails.  I didn't want to delete my session so I didn't run:

[root@dell-per730-01-priv ~]# gluster v geo-replication data root.lab.eng.bos.redhat.com::georep-vol delete force

But I feel 'force' is applicable here, not 'detail'.

Version-Release number of selected component (if applicable):

[root@dell-per730-01-priv ~]# rpm -q glusterfs
glusterfs-3.8.4-18.4.el7rhgs.x86_64

How reproducible:

Every time.

Steps to Reproduce:
1.  Run gluster v geo-rep help
2.  Look at the config / status / delete subcommands and whether force / detail apply

Actual results:

I think there is a typo in the usage text, as described above.

Expected results:

Proper Usage

Comment 3 Kotresh HR 2018-11-27 13:23:52 UTC
With a little formatting, we can see that the geo-replication help outputs the following:

# gluster v geo-replication help
Usage:

volume geo-replication [<VOLNAME>] [<SLAVE-URL>] 
{
	create [[ssh-port n] [[no-verify]|[push-pem]]] [force]
        |start [force]
        |stop [force]
        |pause [force]
        |resume [force]
        |config
        |status [detail]
        |delete [reset-sync-time]
} [options...]
#

It looks like the format of the help text was misread rather than being wrong.

-------------------------

No, 'detail' is associated only with 'status' and is already listed correctly. But I think there is an issue with the 'create' and 'config' options. The usage should have been as below.


Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] 
     {create {[ssh-port n] no-verify|push-pem}[force]
      |start [force] 
      |stop [force]
      |pause [force]
      |resume [force]
      |config [key] [value]
      |status [detail]
      |delete [reset-sync-time]
}[options...]

So please validate the following.

1. The create options as shown above. When 'create' is used, either 'no-verify' or 'push-pem' should be specified.
2. config takes optional 'key' and 'value' arguments.
3. Check what '[options...]' at the end refers to; I think it can be removed.
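For reference, the corrected grammar above can be sketched as a small lookup table. This is a hypothetical Python model for illustration, not GlusterFS source; it simply encodes which modifier each subcommand accepts per the corrected usage text, which makes the reported mismatches explicit:

```python
# Hypothetical model (not GlusterFS code): modifiers each geo-replication
# subcommand accepts, following the corrected usage text above.
ALLOWED_MODIFIERS = {
    "create": {"ssh-port", "no-verify", "push-pem", "force"},
    "start": {"force"},
    "stop": {"force"},
    "pause": {"force"},
    "resume": {"force"},
    "config": set(),          # takes free-form [key] [value] arguments instead
    "status": {"detail"},
    "delete": {"reset-sync-time"},
}

def modifier_ok(subcommand: str, modifier: str) -> bool:
    """Return True if 'modifier' is valid after the given subcommand."""
    return modifier in ALLOWED_MODIFIERS.get(subcommand, set())
```

Under this model, modifier_ok("status", "detail") is True while modifier_ok("config", "detail") and modifier_ok("delete", "detail") are False, which matches the behavior observed in the reproduction steps above.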

Comment 4 Worker Ant 2018-11-27 14:02:40 UTC
REVIEW: https://review.gluster.org/21711 (geo-rep: Geo-rep help text issue) posted (#8) for review on master by Kotresh HR

Comment 5 Shyamsundar 2019-03-25 16:32:14 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

Comment 6 Worker Ant 2019-05-09 07:02:30 UTC
REVIEW: https://review.gluster.org/22689 (geo-rep: Geo-rep help text issue) posted (#1) for review on master by Shwetha K Acharya

Comment 7 Worker Ant 2019-05-23 15:26:09 UTC
REVIEW: https://review.gluster.org/22689 (geo-rep: Geo-rep help text issue) merged (#3) on master by Kotresh HR

