Description of problem:

When I run:

[root@dell-per730-01-priv ~]# gluster v geo-replication help
Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {create [[ssh-port n] [[no-verify]|[push-pem]]] [force]|start [force]|stop [force]|pause [force]|resume [force]|config|status [detail]|delete [reset-sync-time]} [options...]

I see:

[force]|config|status

And:

[detail]|delete

I think this should read:

|config
[detail]|status
[force]|delete

For example, when I run:

[root@dell-per730-01-priv ~]# gluster v geo-replication data root.lab.eng.bos.redhat.com::georep-vol config
special_sync_mode: partial
gluster_log_file: /var/log/glusterfs/geo-replication/data/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol.gluster.log
ssh_command: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
change_detector: changelog
use_meta_volume: true
session_owner: 71be0011-6af3-4250-8028-65eb6563d820
state_file: /var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/monitor.status
gluster_params: aux-gfid-mount acl
remote_gsyncd: /nonexistent/gsyncd
working_dir: /var/lib/misc/glusterfsd/data/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol
state_detail_file: /var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol-detail.status
gluster_command_dir: /usr/sbin/
pid_file: /var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/monitor.pid
georep_session_working_dir: /var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/
ssh_command_tar: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem
master.stime_xattr_name: trusted.glusterfs.71be0011-6af3-4250-8028-65eb6563d820.91505f86-9440-47e1-a2d0-8fb817778f71.stime
changelog_log_file: /var/log/glusterfs/geo-replication/data/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol-changes.log
socketdir: /var/run/gluster
volume_id: 71be0011-6af3-4250-8028-65eb6563d820
ignore_deletes: false
state_socket_unencoded: /var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol.socket
log_file: /var/log/glusterfs/geo-replication/data/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol.log

It is successful. But when I try detail, I get:

[root@dell-per730-01-priv ~]# gluster v geo-replication data root.lab.eng.bos.redhat.com::georep-vol config detail
Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {create [[ssh-port n] [[no-verify]|[push-pem]]] [force]|start [force]|stop [force]|pause [force]|resume [force]|config|status [detail]|delete [reset-sync-time]} [options...]

Also, when I run:

[root@dell-per730-01-priv ~]# gluster v geo-replication data root.lab.eng.bos.redhat.com::georep-vol status

MASTER NODE     MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                                             SLAVE NODE                            STATUS     CRAWL STATUS    LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
192.168.50.1    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas015.sbu.lab.eng.bos.redhat.com    Active     Hybrid Crawl    N/A
192.168.50.6    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas014.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A
192.168.50.2    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas011.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A
192.168.50.3    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas014.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A
192.168.50.5    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas011.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A
192.168.50.4    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas015.sbu.lab.eng.bos.redhat.com    Active     Hybrid Crawl    N/A

As well as:

[root@dell-per730-01-priv ~]# gluster v geo-replication data root.lab.eng.bos.redhat.com::georep-vol status detail

MASTER NODE     MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                                             SLAVE NODE                            STATUS     CRAWL STATUS    LAST_SYNCED    ENTRY    DATA    META    FAILURES    CHECKPOINT TIME    CHECKPOINT COMPLETED    CHECKPOINT COMPLETION TIME
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
192.168.50.1    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas015.sbu.lab.eng.bos.redhat.com    Active     Hybrid Crawl    N/A            0        431     0       0           N/A                N/A                     N/A
192.168.50.5    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas011.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A
192.168.50.6    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas014.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A
192.168.50.4    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas015.sbu.lab.eng.bos.redhat.com    Active     Hybrid Crawl    N/A            0        792     0       0           N/A                N/A                     N/A
192.168.50.2    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas011.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A
192.168.50.3    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas014.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A

And when I run:

[root@dell-per730-01-priv ~]# gluster v geo-replication data root.lab.eng.bos.redhat.com::georep-vol delete detail
Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {create [[ssh-port n] [[no-verify]|[push-pem]]] [force]|start [force]|stop [force]|pause [force]|resume [force]|config|status [detail]|delete [reset-sync-time]} [options...]

It fails. I didn't want to delete my session, so I didn't run:

[root@dell-per730-01-priv ~]# gluster v geo-replication data root.lab.eng.bos.redhat.com::georep-vol delete force

But I feel 'force' is applicable here, not 'detail'.

Version-Release number of selected component (if applicable):
[root@dell-per730-01-priv ~]# rpm -q glusterfs
glusterfs-3.8.4-18.4.el7rhgs.x86_64

How reproducible:
Every time.

Steps to Reproduce:
1. Run gluster v geo-rep help
2. Look at the config / status / delete subcommands and whether force / detail apply to each

Actual results:
I think there is a typo in the help text, as described above.

Expected results:
Proper usage text.

Additional info:
With a little formatting, we can see that the geo-replication help outputs something like this:

# gluster v geo-replication help
Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {
  create [[ssh-port n] [[no-verify]|[push-pem]]] [force]
  |start [force]
  |stop [force]
  |pause [force]
  |resume [force]
  |config
  |status [detail]
  |delete [reset-sync-time]
} [options...]
#

It looks like the help text is correct, but because it is printed on one long wrapped line, its format is easily misread.
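The grouping above can be checked mechanically: splitting the brace-enclosed part of the usage string on '|' at bracket depth 0 recovers one entry per subcommand, showing that 'detail' belongs to 'status' and 'reset-sync-time' to 'delete'. A minimal illustrative sketch in Python (not GlusterFS code):

```python
def split_subcommands(usage):
    """Split a CLI usage string on '|' occurring at bracket depth 0."""
    parts, depth, cur = [], 0, ""
    for ch in usage:
        if ch in "[{":
            depth += 1
        elif ch in "]}":
            depth -= 1
        if ch == "|" and depth == 0:
            parts.append(cur.strip())
            cur = ""
        else:
            cur += ch
    parts.append(cur.strip())
    return parts

# The text between the braces of the printed usage line:
usage = ("create [[ssh-port n] [[no-verify]|[push-pem]]] [force]|start [force]"
         "|stop [force]|pause [force]|resume [force]|config|status [detail]"
         "|delete [reset-sync-time]")

for sub in split_subcommands(usage):
    print(sub)
```

Run against the help string from this report, it yields eight entries, with `config` taking no option, `status [detail]`, and `delete [reset-sync-time]` each intact.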
Am I missing something?
No, you're not missing anything; you're right. 'detail' is associated only with 'status' and is listed correctly. But I think there is an issue with the 'create' and 'config' options. It should have been as below:

Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {
  create {[ssh-port n] no-verify|push-pem} [force]
  |start [force]
  |stop [force]
  |pause [force]
  |resume [force]
  |config [key] [value]
  |status [detail]
  |delete [reset-sync-time]
} [options...]

So please validate the following:

1. The create options as mentioned above: once 'create' is used, either 'no-verify' or 'push-pem' should be given.
2. 'config' takes an optional 'key' and 'value'.
3. Check what '[options...]' at the end refers to. I think it can be removed?
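The proposed grammar amounts to a per-subcommand table of permitted trailing words. A hypothetical sketch of such a validator in Python, purely to illustrate the rules above (this is not the actual GlusterFS CLI parser, and the table simplifies 'ssh-port n' to a single token):

```python
# Hypothetical option table following the proposed usage; not GlusterFS source.
SUBCOMMANDS = {
    "create": {"ssh-port", "no-verify", "push-pem", "force"},
    "start":  {"force"},
    "stop":   {"force"},
    "pause":  {"force"},
    "resume": {"force"},
    "config": None,              # free-form: an optional [key] [value] pair
    "status": {"detail"},
    "delete": {"reset-sync-time"},
}

def validate(subcmd, args):
    """Return True if every extra word is legal for this subcommand."""
    allowed = SUBCOMMANDS.get(subcmd)
    if allowed is None:          # 'config' accepts at most a key and a value
        return subcmd in SUBCOMMANDS and len(args) <= 2
    return all(a in allowed for a in args)

print(validate("status", ["detail"]))   # True:  'detail' is a status option
print(validate("delete", ["detail"]))   # False: 'detail' does not apply here
```

Under this table, the reporter's failing invocations ('config detail' as a key lookup aside, and 'delete detail') would be rejected up front rather than falling through to the generic usage message.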
https://bugzilla.redhat.com/show_bug.cgi?id=1652887
Upstream patch link: https://review.gluster.org/21711 (geo-rep: Geo-rep help text issue)
Hi Shwetha,

The output of the help command on the latest build is as given below:

# gluster v geo-replication help
Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {\
 create [[ssh-port n] [[no-verify] | [push-pem]]] [force] \
 | start [force] \
 | stop [force] \
 | pause [force] \
 | resume [force] \
 | config [[[\!]<option>] [<value>]] \
 | status [detail] \
 | delete [reset-sync-time]}

I think line 1 should read as:

volume geo-replication [<master-volume>] [<slave-ip>]::[<slave-volume>] {\

followed by the rest of the output, since elsewhere in the RHGS docs we have referred to geo-rep sessions as:

# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create push-pem [force]

The changes need to be in line with the existing documentation. Also, in my opinion there is no need to print '\' at the end of each line, as this pattern is not seen elsewhere in the docs; let me know what you think about it.
Hi Anees,

I agree that [<slave-ip>]::[<slave-volume>] is good to have. '\' is a convention indicating that whatever follows it is part of the previous line, so I think there is no harm in keeping it. (This help text was written line by line to avoid confusion or misinterpretation.)
upstream link: https://review.gluster.org/#/c/glusterfs/+/22689/
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2019:3249