Bug 815194 - [FEAT] make CLI more script friendly
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: cli
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Assigned To: Avra Sengupta
QA Contact: Ric Wheeler
Keywords: FutureFeature
Blocks: 817967
Reported: 2012-04-23 01:37 EDT by Amar Tumballi
Modified: 2014-04-17 07:38 EDT (History)
CC: 9 users

Fixed In Version: glusterfs-3.5.0
Doc Type: Enhancement
Last Closed: 2014-04-17 07:38:36 EDT
Type: Bug

Attachments: None
Description Amar Tumballi 2012-04-23 01:37:01 EDT
Description of problem:
Currently the gluster CLI uses 'cli_out()' for both the success and the failure case, but ideally errors should be written to 'stderr' rather than 'stdout' so that scripts can tell them apart.
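For scripts, the split matters because stdout and stderr can then be handled independently. A minimal sketch of the intended behaviour ('fake_cli' below is a hypothetical stand-in for the gluster CLI, not its real code; the non-zero exit status is also part of the mock, not confirmed behaviour):

```shell
# Stand-in for the CLI: success text to stdout (cli_out),
# error text to stderr (cli_err), plus a non-zero exit status.
fake_cli() {
    if [ "$1" = "goodvol" ]; then
        echo "volume set: success"
    else
        echo "Volume $1 does not exist" >&2
        return 1
    fi
}

fake_cli goodvol >/dev/null                    # stdout discarded, nothing shown
fake_cli badvol 2>errors.log || cat errors.log # errors captured separately from data
```

A monitoring script can then silence normal output while still logging failures, which is impossible when everything goes to stdout.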
Comment 1 Anand Avati 2012-04-25 03:45:59 EDT
CHANGE: http://review.gluster.com/3208 (cli: implement a fn 'cli_err()' to send error messages to 'stderr') merged in master by Vijay Bellur (vijay@gluster.com)
Comment 2 Anand Avati 2012-04-27 01:50:48 EDT
CHANGE: http://review.gluster.com/3229 (cli: Make use of cli_err()) merged in master by Anand Avati (avati@redhat.com)
Comment 3 Amar Tumballi 2012-05-31 04:09:24 EDT
[root@supernova ~]# gluster volume set abcd ping ok >/dev/null
Volume abcd does not exist
Set volume unsuccessful
[root@supernova ~]# gluster volume set test write-behind off >/dev/null
[root@supernova ~]# 

Notice that the errors now go to stderr (stdout was redirected to /dev/null).
Comment 4 Vijay Bellur 2012-08-09 12:32:16 EDT
Re-opening to get more suggestions on CLI improvements.
Comment 5 Harry Mangalam 2012-08-09 13:27:50 EDT
It would be very useful if you provided a text version of the complete status of the gluster volumes.  Normally I use:

gluster volume status all detail
  and
gluster volume info

It would be nice to get all that info from a single, concise command, i.e. one that shows configuration as well as current status.

That may not be logical from your point of view, but it sure helps when you're trying to figure out what's wrong.
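In the meantime, a small wrapper can print both views per volume. This is only a sketch; it assumes the standard 'gluster' CLI is on the PATH and that 'gluster volume list' is available:

```shell
# Print configuration and live status for every volume in one pass.
volume_report() {
    for vol in $(gluster volume list); do
        echo "== $vol =="
        gluster volume info "$vol"
        gluster volume status "$vol" detail
    done
}
# volume_report        # run on a node that is part of the cluster
```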

Also, it would be good to be able to distinguish the output from 
-------------------------------------------------------
$ gluster peer status
Number of Peers: 3

Hostname: 10.255.77.2
Uuid: 3fcd023c-9cc9-4d1c-84c4-babfb4492e38
State: Peer in Cluster (Connected)

Hostname: pbs2ib
Uuid: 26de63bd-c5b7-48ba-b81d-5d77a533d077
State: Peer in Cluster (Connected)

Hostname: pbs4ib
Uuid: 2a593581-bf45-446c-8f7c-212c53297803
State: Peer in Cluster (Connected)
-------------------------------------------------------

which /implies/ (but, granted, doesn't state) that all the bricks are connected, as distinct from the actual gluster /brick/ status revealed by 'gluster volume status all detail', which tells you whether bricks are not only connected but ONLINE.

I would say (STRONGLY) that the 'gluster peer status' command should emit warning messages if a peer is connected but one of its bricks is offline.

Just sayin...

hjm
Comment 6 purpleidea 2012-08-09 20:04:18 EDT
As discussed on the mailing list, please don't remove the --xml flag, as I'm using this switch to parse output from automation tools such as Puppet. It would also make sense for this to be a stable interface across major version changes, if possible.

Thanks!
James

PS: my puppet module using such --xml is available here: https://github.com/purpleidea/puppet-gluster the latest commits that use this feature will appear within the week I think.
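For anyone reading along, fields can be pulled out of --xml output with any XML tool. The fragment below is illustrative only: the element names are assumptions, not the actual schema, and a real XML parser (as a Puppet module would use) is preferable to sed for anything non-trivial:

```shell
# Hypothetical fragment of 'gluster volume info --xml' output.
xml='<cliOutput><volInfo><volume><name>test</name><status>Started</status></volume></volInfo></cliOutput>'

# Quick-and-dirty field extraction for one-off scripts.
name=$(printf '%s\n' "$xml" | sed -n 's:.*<name>\(.*\)</name>.*:\1:p')
echo "$name"   # -> test
```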
Comment 7 Jeff Williams 2012-08-10 05:06:39 EDT
It would be great if the volume create command gave a message like:

srv14:/content/sg13/vd00 or a prefix of it is already marked as part of a volume (extended attribute trusted.glusterfs.volume-id exists on /content/sg13/vd00)

when creation fails due to extended attributes, so that we are pointed in the right direction.
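Until such a message exists, the attribute can be checked by hand with getfattr (from the 'attr' package); the path below is the one from the example above:

```shell
# Does this path already carry a gluster volume-id xattr?
brick=/content/sg13/vd00
if getfattr -n trusted.glusterfs.volume-id -e hex "$brick" 2>/dev/null; then
    echo "already marked as part of a volume"
else
    echo "no volume-id xattr found (or getfattr unavailable)"
fi
```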
Comment 8 mailbox 2012-08-20 05:52:39 EDT
Also, if possible please modify the command

gluster volume rebalance <VOLUME> status

so that the order of Gluster servers in the output is always the same, i.e. so that the progress of a rebalance operation can be monitored with the "watch" command.
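Until the ordering is stable, one workaround is to sort the data lines before display. A sketch (the 'tail -n +2' assumes a one-line header, which may not match the real status output):

```shell
# Stable server ordering for use under 'watch'.
rebalance_status_sorted() {
    gluster volume rebalance "$1" status | tail -n +2 | sort
}
# watch -n 5 "gluster volume rebalance myvol status | tail -n +2 | sort"
```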
Comment 9 mailbox 2012-08-20 06:17:36 EDT
In the rebalance logs called "VOLNAME-rebalance.log", please reformat the output in case of error so that a space is inserted between the filename and the string "gfid not present":

E [dht-rebalance.c:1328:gf_defrag_fix_layout] 0-storage-dht: /path/to/filenamegfid not present

so that the filename can be easily extracted via grep or other text utilities.
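With the space in place, the path does extract cleanly; a sketch using the log line from the report:

```shell
# Sample rebalance log line (with the requested space added).
line='E [dht-rebalance.c:1328:gf_defrag_fix_layout] 0-storage-dht: /path/to/filename gfid not present'

# Pull out just the filename.
path=$(printf '%s\n' "$line" | sed -n 's/.*dht: \(.*\) gfid not present.*/\1/p')
echo "$path"   # -> /path/to/filename
```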
Comment 10 Vijay Bellur 2012-10-11 21:11:46 EDT
CHANGE: http://review.gluster.org/4007 (cli: introduce "--" as option terminator) merged in master by Anand Avati (avati@redhat.com)
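The "--" terminator follows the usual Unix convention: everything after it is treated as an operand, never an option, which matters when an argument begins with a dash. The same idea, demonstrated with grep (the gluster line is a hypothetical usage sketch, not verified syntax):

```shell
# Without '--', grep would treat '--version' as an option;
# with it, '--version' becomes the search pattern.
printf '%s\n' -n --version | grep -- --version

# Hypothetical gluster usage:
#   gluster volume info -- "$volume_name_starting_with_dash"
```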
Comment 11 Amar Tumballi 2012-12-24 02:03:13 EST
Reducing the priority, as a few of the improvements are already done upstream (3.4.x branch). Keeping it open to make sure we have all the commands covered for script-friendliness.
Comment 12 Vijay Bellur 2013-02-18 01:01:02 EST
CHANGE: http://review.gluster.org/4531 (cluster/dht: improvement in rebalance logs) merged in master by Anand Avati (avati@redhat.com)
Comment 13 Justin Clift 2013-03-07 21:25:42 EST
Is there something wrong with the suggestion in comment 7? Mentioning the trusted.glusterfs.volume-id extended attribute seems like it would be helpful for users.
Comment 14 Amar Tumballi 2013-03-07 22:53:17 EST
Justin, comment #7 looks totally right. We will work on it along with a couple of other fixes in 'volume create'.
Comment 18 Niels de Vos 2014-04-17 07:38:36 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
