Description of problem:
Currently the gluster CLI uses 'cli_out()' for both the success and the failure cases. Ideally, errors should be written to 'stderr' instead of 'stdout'.
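For illustration, a minimal sketch of the idea (not the actual gluster source): two printf-style helpers with the same interface, one writing to stdout and one to stderr, so that scripts can separate results from errors.

#include <stdarg.h>
#include <stdio.h>

/* Normal output goes to stdout, so it can be captured or piped. */
static void cli_out(const char *fmt, ...)
{
        va_list ap;

        va_start(ap, fmt);
        vfprintf(stdout, fmt, ap);
        va_end(ap);
        fputc('\n', stdout);
}

/* Error output goes to stderr, so it survives '>/dev/null'. */
static void cli_err(const char *fmt, ...)
{
        va_list ap;

        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
        fputc('\n', stderr);
}

int main(void)
{
        cli_out("volume set: success");
        cli_err("Volume %s does not exist", "abcd");
        return 0;
}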
CHANGE: http://review.gluster.com/3208 (cli: implement a fn 'cli_err()' to send error messages to 'stderr') merged in master by Vijay Bellur (vijay)
CHANGE: http://review.gluster.com/3229 (cli: Make use of cli_err()) merged in master by Anand Avati (avati)
[root@supernova ~]# gluster volume set abcd ping ok >/dev/null
Volume abcd does not exist
Set volume unsuccessful
[root@supernova ~]# gluster volume set test write-behind off >/dev/null
[root@supernova ~]#

Notice that the errors now go to stderr (stdout was redirected to /dev/null).
Re-opening to get more suggestions on CLI improvements.
It would be very useful if you would provide a text version of the complete status of the gluster volumes. Normally I use:

gluster volume status all detail

and

gluster volume info

It would be nice to get all that info from a single, concise command, i.e. one that shows the configuration as well as the current status. That may not be logical from your point of view, but it sure helps when you're trying to figure out what's wrong.

Also, it would be good to be able to distinguish the output from

-------------------------------------------------------
$ gluster peer status
Number of Peers: 3

Hostname: 10.255.77.2
Uuid: 3fcd023c-9cc9-4d1c-84c4-babfb4492e38
State: Peer in Cluster (Connected)

Hostname: pbs2ib
Uuid: 26de63bd-c5b7-48ba-b81d-5d77a533d077
State: Peer in Cluster (Connected)

Hostname: pbs4ib
Uuid: 2a593581-bf45-446c-8f7c-212c53297803
State: Peer in Cluster (Connected)
-------------------------------------------------------

which /implies/ (but, granted, doesn't state) that all the bricks are connected, from the actual gluster /brick/ status revealed by 'gluster volume status all detail', in which you can tell which bricks are not only connected but ONLINE. I would say (STRONGLY) that the 'gluster peer status' command should emit warning messages if a peer is connected but one of its bricks is offline. Just sayin...

hjm
As discussed on the mailing list, please don't remove the --xml flag, as I'm using this switch to parse the output from automation scripts such as puppet. It would also make sense for this to be a stable interface across major version changes, if possible. Thanks!

James

PS: my puppet module using --xml is available here: https://github.com/purpleidea/puppet-gluster
The latest commits that use this feature will appear within the week, I think.
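As a sketch of what machine-parsing that output can look like (assuming only that --xml emits well-formed XML, not any particular schema), a consumer could walk the tree with libxml2. The file name and the dump-to-file step below are illustrative only.

/* Build with: gcc walk.c $(xml2-config --cflags --libs)
 * Usage, e.g.: gluster volume info --xml > info.xml && ./a.out info.xml */
#include <stdio.h>
#include <libxml/parser.h>
#include <libxml/tree.h>

/* Recursively print element names, indented by nesting depth. */
static void walk(xmlNode *node, int depth)
{
        for (xmlNode *cur = node; cur; cur = cur->next) {
                if (cur->type == XML_ELEMENT_NODE)
                        printf("%*s%s\n", depth * 2, "",
                               (const char *)cur->name);
                walk(cur->children, depth + 1);
        }
}

int main(int argc, char **argv)
{
        if (argc != 2) {
                fprintf(stderr, "usage: %s <file.xml>\n", argv[0]);
                return 1;
        }
        xmlDoc *doc = xmlReadFile(argv[1], NULL, 0);
        if (!doc) {
                fprintf(stderr, "failed to parse %s\n", argv[1]);
                return 1;
        }
        walk(xmlDocGetRootElement(doc), 0);
        xmlFreeDoc(doc);
        return 0;
}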
It would be great if the volume create command gave a message like:

srv14:/content/sg13/vd00 or a prefix of it is already marked as part of a volume (extended attribute trusted.glusterfs.volume-id exists on /content/sg13/vd00)

when creation fails due to extended attributes, so we could be pointed in the right direction.
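For reference, a small Linux-only sketch of how a script or tool could check for that attribute itself via getxattr(2). The 16-byte UUID size is an assumption about the volume-id format, and reading trusted.* attributes generally requires root.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/xattr.h>

int main(int argc, char **argv)
{
        unsigned char id[16]; /* assumed: volume-id is a 16-byte UUID */
        ssize_t n;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <brick-path>\n", argv[0]);
                return 2;
        }
        n = getxattr(argv[1], "trusted.glusterfs.volume-id", id, sizeof(id));
        if (n >= 0) {
                printf("%s is already marked as part of a volume\n", argv[1]);
                return 1;
        }
        if (errno == ENODATA)
                printf("%s carries no volume-id\n", argv[1]);
        else
                fprintf(stderr, "getxattr(%s): %s\n", argv[1],
                        strerror(errno));
        return 0;
}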
Also, if possible, please modify the command

gluster volume rebalance <VOLUME> status

so that the order of the Gluster servers in the output is always the same, i.e. so that the progress of a rebalance operation can be monitored with the "watch" command.
In the rebalance logs called "VOLNAME-rebalance.log", please reformat the output in case of error so that a space is inserted between the filename and the string "gfid not present":

E [dht-rebalance.c:1328:gf_defrag_fix_layout] 0-storage-dht: /path/to/filenamegfid not present

so that the filename can be easily extracted via grep or other text utilities.
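Until the format is fixed, one way to recover the path despite the missing space is to cut the text between the xlator prefix and the "gfid not present" marker. A hedged sketch, using the sample line from the comment above (it breaks down if the filename itself contains the marker string):

#include <stdio.h>
#include <string.h>

int main(void)
{
        const char *line =
                "E [dht-rebalance.c:1328:gf_defrag_fix_layout] "
                "0-storage-dht: /path/to/filenamegfid not present";
        const char *start = strstr(line, ": /");            /* path begins after ": " */
        const char *end = strstr(line, "gfid not present"); /* marker fused to the path */

        if (start && end && start + 2 < end) {
                start += 2; /* skip ": " */
                printf("%.*s\n", (int)(end - start), start);
                /* prints: /path/to/filename */
        }
        return 0;
}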
CHANGE: http://review.gluster.org/4007 (cli: introduce "--" as option terminator) merged in master by Anand Avati (avati)
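The idea behind an option terminator, sketched minimally (illustrative only, not the gluster CLI source): once "--" is seen, every later argument is taken literally, even if it starts with '-'.

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
        int opts_done = 0;

        for (int i = 1; i < argc; i++) {
                if (!opts_done && strcmp(argv[i], "--") == 0) {
                        opts_done = 1; /* stop treating '-...' as options */
                        continue;
                }
                if (!opts_done && argv[i][0] == '-')
                        printf("option:   %s\n", argv[i]);
                else
                        printf("argument: %s\n", argv[i]);
        }
        return 0;
}

For example, './a.out --xml -- -oddly-named-arg' reports the final token as an argument rather than an option.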
Reducing the priority, as a few of the improvements are already done upstream (3.4.x branch). Keeping the bug open to make sure we have all the commands covered for script friendliness.
CHANGE: http://review.gluster.org/4531 (cluster/dht: improvement in rebalance logs) merged in master by Anand Avati (avati)
Is there something wrong with the suggestion in comment 7? It seems like mentioning the trusted.glusterfs.volume-id extended attribute would be helpful for users.
Justin, comment #7 looks totally right. We will work on it along with a couple of other fixes in 'volume create'.
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user