Bug 1498730
| Summary: | The output of the "gluster help" command is difficult to read | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Nithya Balachandran <nbalacha> |
| Component: | cli | Assignee: | Nithya Balachandran <nbalacha> |
| Status: | CLOSED ERRATA | QA Contact: | Rochelle <rallan> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | rhgs-3.3 | CC: | amukherj, bugs, nbalacha, nchilaka, rallan, rhs-bugs, storage-qa-internal |
| Target Milestone: | --- | | |
| Target Release: | RHGS 3.4.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.12.2-2 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1474768 | Environment: | |
| Last Closed: | 2018-09-04 06:36:24 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1474768, 1509786, 1509789 | | |
| Bug Blocks: | 1503134 | | |
Description
Nithya Balachandran 2017-10-05 05:19:34 UTC
As mentioned in comment 1, the proposed format for the "gluster help" output is reflected in the latest builds:

    [root@dhcp41-161 ~]# rpm -qa | grep gluster
    vdsm-gluster-4.17.33-1.2.el7rhgs.noarch
    glusterfs-libs-3.12.2-4.el7rhgs.x86_64
    glusterfs-api-3.12.2-4.el7rhgs.x86_64
    glusterfs-rdma-3.12.2-4.el7rhgs.x86_64
    libvirt-daemon-driver-storage-gluster-3.9.0-12.el7.x86_64
    python2-gluster-3.12.2-4.el7rhgs.x86_64
    gluster-nagios-common-0.2.4-1.el7rhgs.noarch
    glusterfs-3.12.2-4.el7rhgs.x86_64
    glusterfs-fuse-3.12.2-4.el7rhgs.x86_64
    glusterfs-cli-3.12.2-4.el7rhgs.x86_64
    glusterfs-geo-replication-3.12.2-4.el7rhgs.x86_64
    gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
    glusterfs-client-xlators-3.12.2-4.el7rhgs.x86_64
    glusterfs-server-3.12.2-4.el7rhgs.x86_64

    [root@dhcp41-161 ~]# gluster help
    peer help - display help for peer commands
    volume help - display help for volume commands
    volume bitrot help - display help for volume bitrot commands
    volume quota help - display help for volume quota commands
    volume tier help - display help for volume tier commands
    snapshot help - display help for snapshot commands
    global help - list global commands

#1.

    [root@dhcp41-161 ~]# gluster peer help
    gluster peer commands
    ======================
    peer detach { <HOSTNAME> | <IP-address> } [force] - detach peer specified by <HOSTNAME>
    peer help - display help for peer commands
    peer probe { <HOSTNAME> | <IP-address> } - probe peer specified by <HOSTNAME>
    peer status - list status of peers
    pool list - list all the nodes in the pool (including localhost)

#2.

    volume add-brick <VOLNAME> [<stripe|replica> <COUNT> [arbiter <COUNT>]] <NEW-BRICK> ... [force] - add brick to volume <VOLNAME>
    volume barrier <VOLNAME> {enable|disable} - Barrier/unbarrier file operations on a volume
    volume clear-locks <VOLNAME> <path> kind {blocked|granted|all}{inode [range]|entry [basename]|posix [range]} - Clear locks held on path
    volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT> [arbiter <COUNT>]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK>?<vg_name>... [force] - create a new volume of specified type with mentioned bricks
    volume delete <VOLNAME> - delete volume specified by <VOLNAME>
    volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {create [[ssh-port n] [[no-verify]|[push-pem]]] [force]|start [force]|stop [force]|pause [force]|resume [force]|config|status [detail]|delete [reset-sync-time]} [options...] - Geo-sync operations
    volume get <VOLNAME|all> <key|all> - Get the value of the all options or given option for volume <VOLNAME> or all option. gluster volume get all all is to get all global options
    volume heal <VOLNAME> [enable | disable | full |statistics [heal-count [replica <HOSTNAME:BRICKNAME>]] |info [summary | split-brain] |split-brain {bigger-file <FILE> | latest-mtime <FILE> |source-brick <HOSTNAME:BRICKNAME> [<FILE>]} |granular-entry-heal {enable | disable}] - self-heal commands on volume specified by <VOLNAME>
    volume help - display help for volume commands
    volume info [all|<VOLNAME>] - list information of all volumes
    volume list - list all volumes in cluster
    volume log <VOLNAME> rotate [BRICK] - rotate the log file for corresponding volume/brick
    volume log rotate <VOLNAME> [BRICK] - rotate the log file for corresponding volume/brick NOTE: This is an old syntax, will be deprecated from next release.
    volume profile <VOLNAME> {start|info [peek|incremental [peek]|cumulative|clear]|stop} [nfs] - volume profile operations
    volume rebalance <VOLNAME> {{fix-layout start} | {start [force]|stop|status}} - rebalance operations
    volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... <start|stop|status|commit|force> - remove brick from volume <VOLNAME>
    volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force} - replace-brick operations
    volume reset <VOLNAME> [option] [force] - reset all the reconfigured options
    volume reset-brick <VOLNAME> <SOURCE-BRICK> {{start} | {<NEW-BRICK> commit}} - reset-brick operations
    volume set <VOLNAME> <KEY> <VALUE> - set options for volume <VOLNAME>
    volume start <VOLNAME> [force] - start volume specified by <VOLNAME>
    volume statedump <VOLNAME> [[nfs|quotad] [all|mem|iobuf|callpool|priv|fd|inode|history]... | [client <hostname:process-id>]] - perform statedump on bricks
    volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad|tierd]] [detail|clients|mem|inode|fd|callpool|tasks] - display status of all or specified volume(s)/brick
    volume stop <VOLNAME> [force] - stop volume specified by <VOLNAME>
    volume sync <HOSTNAME> [all|<VOLNAME>] - sync the volume information from a peer
    volume top <VOLNAME> {open|read|write|opendir|readdir|clear} [nfs|brick <brick>] [list-cnt <value>] | volume top <VOLNAME> {read-perf|write-perf} [bs <size> count <count>] [brick <brick>] [list-cnt <value>] - volume top operations

#3.

    [root@dhcp41-161 ~]# gluster volume bitrot help
    gluster bitrot commands
    ========================
    volume bitrot <VOLNAME> scrub {pause|resume|status|ondemand} - Pause/resume the scrubber for <VOLNAME>. Status displays the status of the scrubber. ondemand starts the scrubber immediately.
    volume bitrot <VOLNAME> scrub-frequency {hourly|daily|weekly|biweekly|monthly} - Set the frequency of the scrubber for volume <VOLNAME>
    volume bitrot <VOLNAME> scrub-throttle {lazy|normal|aggressive} - Set the speed of the scrubber for volume <VOLNAME>
    volume bitrot <VOLNAME> {enable|disable} - Enable/disable bitrot for volume <VOLNAME>
    volume bitrot help - display help for volume bitrot commands

#4.

    [root@dhcp41-161 ~]# gluster volume quota help
    gluster quota commands
    =======================
    volume inode-quota <VOLNAME> enable - Enable/disable inode-quota for <VOLNAME>
    volume quota <VOLNAME> {alert-time|soft-timeout|hard-timeout} {<time>} - Set quota timeout for <VOLNAME>
    volume quota <VOLNAME> {enable|disable|list [<path> ...]| list-objects [<path> ...] | remove <path>| remove-objects <path> | default-soft-limit <percent>} - Enable/disable and configure quota for <VOLNAME>
    volume quota <VOLNAME> {limit-objects <path> <number> [<percent>]} - Set the maximum number of entries allowed in <path> for <VOLNAME>
    volume quota <VOLNAME> {limit-usage <path> <size> [<percent>]} - Set maximum size for <path> for <VOLNAME>
    volume quota help - display help for volume quota commands

#5.

    [root@dhcp41-161 ~]# gluster volume tier help
    gluster tier commands
    ======================
    volume attach-tier <VOLNAME> [<replica COUNT>] <NEW-BRICK>... - NOTE: this is old syntax, will be deprecated in next release. Please use gluster volume tier <vol> attach [<replica COUNT>] <NEW-BRICK>...
    volume detach-tier <VOLNAME> <start|stop|status|commit|force> - NOTE: this is old syntax, will be deprecated in next release. Please use gluster volume tier <vol> detach {start|stop|commit} [force]
    volume tier <VOLNAME> attach [<replica COUNT>] <NEW-BRICK>... [force] - Attach a hot tier to <VOLNAME>
    volume tier <VOLNAME> detach <start|stop|status|commit|[force]> - Detach the hot tier from <VOLNAME>
    volume tier <VOLNAME> start [force] - Start the tier service for <VOLNAME>
    volume tier <VOLNAME> status - Display tier status for <VOLNAME>
    volume tier <VOLNAME> stop [force] - Stop the tier service for <VOLNAME>
    volume tier help - display help for volume tier commands

#6.

    [root@dhcp41-161 ~]# gluster snapshot help
    gluster snapshot commands
    =========================
    snapshot activate <snapname> [force] - Activate snapshot volume.
    snapshot clone <clonename> <snapname> - Snapshot Clone.
    snapshot config [volname] ([snap-max-hard-limit <count>] [snap-max-soft-limit <percent>]) | ([auto-delete <enable|disable>])| ([activate-on-create <enable|disable>]) - Snapshot Config.
    snapshot create <snapname> <volname> [no-timestamp] [description <description>] [force] - Snapshot Create.
    snapshot deactivate <snapname> - Deactivate snapshot volume.
    snapshot delete (all | snapname | volume <volname>) - Snapshot Delete.
    snapshot help - display help for snapshot commands
    snapshot info [(snapname | volume <volname>)] - Snapshot Info.
    snapshot list [volname] - Snapshot List.
    snapshot restore <snapname> - Snapshot Restore.
    snapshot status [(snapname | volume <volname>)] - Snapshot Status.

#7.

    [root@dhcp41-161 ~]# gluster global help
    gluster global commands
    ========================
    get-state [<daemon>] [[odir </path/to/output/dir/>] [file <filename>]] [detail|volumeoptions] - Get local state representation of mentioned daemon
    global help - list global commands
    nfs-ganesha {enable| disable} - Enable/disable NFS-Ganesha support

Moving this bug to verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607
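Incidentally, the `command - description` layout verified above is also easy to consume programmatically. A minimal Python sketch (the sample lines are copied from the transcript above; the `parse_help` helper is illustrative only, not part of gluster):

```python
# Parse "command - description" help lines like those printed by
# "gluster help" in the transcript above into a lookup table.
help_lines = [
    "peer help - display help for peer commands",
    "volume help - display help for volume commands",
    "snapshot help - display help for snapshot commands",
]

def parse_help(lines):
    """Split each line at the first ' - ' separator."""
    entries = {}
    for line in lines:
        command, _, description = line.partition(" - ")
        entries[command.strip()] = description.strip()
    return entries

entries = parse_help(help_lines)
print(entries["peer help"])  # display help for peer commands
```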