Bug 1043472 - [SNAPSHOT] : snapshot cli tree structure is not present as part of the "gluster help"
Summary: [SNAPSHOT] : snapshot cli tree structure is not present as part of the "gluster help"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: snapshot
Version: rhgs-3.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.0.0
Assignee: Vijaikumar Mallikarjuna
QA Contact: Rahul Hinduja
URL:
Whiteboard: SNAPSHOT
Depends On:
Blocks:
 
Reported: 2013-12-16 12:04 UTC by Rahul Hinduja
Modified: 2016-09-17 13:00 UTC
CC List: 6 users

Fixed In Version: glusterfs-3.4.1.1.snap.feb17.2014git-1.el6.x86_64
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-09-22 19:30:24 UTC
Target Upstream Version:




Links
System: Red Hat Product Errata RHEA-2014:1278
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Storage Server 3.0 bug fix and enhancement update
Last Updated: 2014-09-22 23:26:55 UTC

Description Rahul Hinduja 2013-12-16 12:04:29 UTC
Description of problem:
=======================

Currently "gluster help" only list the volume and peer command family, it should also list the snapshot commands as part of the gluster help.

Version-Release number of selected component (if applicable):
=============================================================

glusterfs-3.4.0.snap.dec03.2013git-1.el6.x86_64


How reproducible:
=================
1/1

Steps to Reproduce:
==================
1. gluster help | grep -i snapshot

Actual results:
===============

[root@snapshot-09 ~]# gluster help | grep -i snapshot
[root@snapshot-09 ~]# 

[root@snapshot-09 ~]# gluster help 
volume info [all|<VOLNAME>] - list information of all volumes
volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK>... [force] - create a new volume of specified type with mentioned bricks
volume delete <VOLNAME> - delete volume specified by <VOLNAME>
volume start <VOLNAME> [force] - start volume specified by <VOLNAME>
volume stop <VOLNAME> [force] - stop volume specified by <VOLNAME>
volume add-brick <VOLNAME> [<stripe|replica> <COUNT>] <NEW-BRICK> ... [force] - add brick to volume <VOLNAME>
volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... [start|stop|status|commit|force] - remove brick from volume <VOLNAME>
volume rebalance <VOLNAME> [fix-layout] {start|stop|status} [force] - rebalance operations
volume replace-brick <VOLNAME> <BRICK> <NEW-BRICK> {start [force]|pause|abort|status|commit [force]} - replace-brick operations
volume set <VOLNAME> <KEY> <VALUE> - set options for volume <VOLNAME>
volume help - display help for the volume command
volume log rotate <VOLNAME> [BRICK] - rotate the log file for corresponding volume/brick
volume sync <HOSTNAME> [all|<VOLNAME>] - sync the volume information from a peer
volume reset <VOLNAME> [option] [force] - reset all the reconfigured options
volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {create [push-pem] [force]|start [force]|stop [force]|config|status [detail]|delete} [options...] - Geo-sync operations
volume profile <VOLNAME> {start|stop|info [nfs]} - volume profile operations
volume quota <VOLNAME> <enable|disable|limit-usage|list|remove> [path] [value] - quota translator specific operations
volume top <VOLNAME> {open|read|write|opendir|readdir|clear} [nfs|brick <brick>] [list-cnt <value>] |
volume top <VOLNAME> {read-perf|write-perf} [bs <size> count <count>] [brick <brick>] [list-cnt <value>] - volume top operations
volume status [all | <VOLNAME> [nfs|shd|<BRICK>]] [detail|clients|mem|inode|fd|callpool|tasks] - display status of all or specified volume(s)/brick
volume heal <VOLNAME> [{full | statistics {heal-count {replica <hostname:brickname>}} |info {healed | heal-failed | split-brain}}] - self-heal commands on volume specified by <VOLNAME>
volume statedump <VOLNAME> [nfs] [all|mem|iobuf|callpool|priv|fd|inode|history]... - perform statedump on bricks
volume list - list all volumes in cluster
volume clear-locks <VOLNAME> <path> kind {blocked|granted|all}{inode [range]|entry [basename]|posix [range]} - Clear locks held on path
peer probe <HOSTNAME> - probe peer specified by <HOSTNAME>
peer detach <HOSTNAME> [force] - detach peer specified by <HOSTNAME>
peer status - list status of peers
peer help - Help command for peer 
pool list - list all the nodes in the pool (including localhost)
quit - quit
help - display command options
exit - exit
[root@snapshot-09 ~]# 


Expected results:
=================

The "gluster help" output should include the snapshot commands as well.

Comment 2 Vijaikumar Mallikarjuna 2014-01-06 09:58:32 UTC
Patch posted: http://review.gluster.org/#/c/6647/
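
For context, the gluster CLI builds its help output from per-family command tables ("pattern - description" pairs), and the fix amounts to wiring the snapshot family's table into the same path that volume and peer already use. The following is a minimal self-contained sketch of that table-driven pattern, not the actual glusterd/cli source; the names cmd_entry, cmd_family, and families are invented here for illustration, and the entries are abbreviated from the transcripts in this bug.

#include <stdio.h>
#include <stddef.h>

/* Hypothetical command entry: the real CLI keeps similar per-family
 * tables of command patterns and descriptions. */
struct cmd_entry {
    const char *pattern;
    const char *desc;
};

static const struct cmd_entry volume_cmds[] = {
    { "volume info [all|<VOLNAME>]", "list information of all volumes" },
    { "volume help", "display help for the volume command" },
};

static const struct cmd_entry peer_cmds[] = {
    { "peer probe <HOSTNAME>", "probe peer specified by <HOSTNAME>" },
    { "peer status", "list status of peers" },
};

static const struct cmd_entry snapshot_cmds[] = {
    { "snapshot help", "display help for snapshot commands" },
    { "snapshot create <volnames> [-n <snap-name|cg-name>] [-d <description>]",
      "Snapshot Create." },
};

struct cmd_family {
    const struct cmd_entry *cmds;
    size_t                  count;
};

/* Before the fix, only the volume and peer tables were walked by
 * "gluster help"; registering the snapshot table here mirrors what
 * the patch does. */
static const struct cmd_family families[] = {
    { volume_cmds,   sizeof(volume_cmds)   / sizeof(volume_cmds[0])   },
    { peer_cmds,     sizeof(peer_cmds)     / sizeof(peer_cmds[0])     },
    { snapshot_cmds, sizeof(snapshot_cmds) / sizeof(snapshot_cmds[0]) },
};

int main(void)
{
    /* "gluster help": print every registered family's commands. */
    for (size_t f = 0; f < sizeof(families) / sizeof(families[0]); f++)
        for (size_t c = 0; c < families[f].count; c++)
            printf("%s - %s\n",
                   families[f].cmds[c].pattern,
                   families[f].cmds[c].desc);
    return 0;
}

With the snapshot table registered alongside the others, "gluster help | grep -i snapshot" produces the entries shown in the verification in comment 3 below.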

Comment 3 Rahul Hinduja 2014-03-03 12:15:26 UTC
Verified with build: glusterfs-3.4.1.1.snap.feb17.2014git-1.el6.x86_64

[root@snapshot-09 ~]# gluster help | grep -i snapshot
snapshot help - display help for snapshot commands
snapshot create <volnames> [-n <snap-name|cg-name>] [-d <description>] - Snapshot Create.
snapshot restore (-v <volname> <snap-name> | -c <cg-name>) - Snapshot Restore.
snapshot list [<volnames> | <volname> [-s <snapname>] | -c <cgname> ] [-d] - Snapshot List.
snapshot config < volname | all > [ snap-max-hard-limit <count> | snap-max-soft-limit <percent> ] - Snapshot Config.
snapshot delete (<volname> -s <snapname> | -c <cgname>) [force] - Snapshot Delete.
[root@snapshot-09 ~]# 


Marking bug as verified

Comment 5 Nagaprasad Sathyanarayana 2014-04-21 06:18:17 UTC
Marking snapshot BZs for RHS 3.0.

Comment 6 Nagaprasad Sathyanarayana 2014-05-19 10:56:39 UTC
Setting flags required to add BZs to RHS 3.0 Errata

Comment 9 errata-xmlrpc 2014-09-22 19:30:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html

