Bug 1395603 - [RFE] JSON output for all Events CLI commands
Summary: [RFE] JSON output for all Events CLI commands
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: eventsapi
Version: rhgs-3.2
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Aravinda VK
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On: 1357753 1400845
Blocks: 1351503
 
Reported: 2016-11-16 09:45 UTC by Aravinda VK
Modified: 2017-03-23 06:19 UTC
CC: 3 users

Fixed In Version: glusterfs-3.8.4-7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1357753
Environment:
Last Closed: 2017-03-23 06:19:30 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:0486 0 normal SHIPPED_LIVE Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update 2017-03-23 09:18:45 UTC

Description Aravinda VK 2016-11-16 09:45:24 UTC
+++ This bug was initially created as a clone of Bug #1357753 +++

Description of problem:
To consume the APIs programmatically, provide JSON or XML output for the gluster-eventsapi commands.

For example:

gluster-eventsapi start --json
gluster-eventsapi status --json
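As a sketch of the intended consumption pattern (not part of the fix itself): each command run with --json emits a single envelope of the form {"output": ..., "error": ""}, as shown in the verified outputs later in this bug. A caller could parse it like this; the sample string below copies the config-get output recorded in comment 7.

```python
import json

# Sample envelope, copied from the verified `gluster-eventsapi config-get --json`
# output in this bug. In practice the string would come from the command's stdout.
sample = '{"output": {"log_level": "INFO", "port": 24009}, "error": ""}'

result = json.loads(sample)

# A non-empty "error" field signals failure; "output" carries the payload.
if result["error"]:
    raise RuntimeError("command failed: %s" % result["error"])

config = result["output"]
print(config["log_level"], config["port"])
```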

Comment 2 Aravinda VK 2016-11-17 11:27:52 UTC
Upstream patch sent for review
http://review.gluster.org/15867

Comment 5 Aravinda VK 2016-12-02 10:57:42 UTC
Upstream patches: (     master) http://review.gluster.org/15867
                  (release-3.9) http://review.gluster.org/16008

Downstream Patch: https://code.engineering.redhat.com/gerrit/91988

Comment 7 Sweta Anandpara 2016-12-17 17:00:30 UTC
Tested and verified this on the build 3.8.4-8

JSON output is shown correctly for all gluster-eventsapi commands. The help message of every command also states that its output can be displayed in JSON.
Detailed logs are pasted below. Moving this BZ to verified in 3.2.

[root@dhcp47-60 ~]# rpm -qa | grep gluster
glusterfs-3.8.4-8.el7rhgs.x86_64
glusterfs-cli-3.8.4-8.el7rhgs.x86_64
glusterfs-api-3.8.4-8.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-8.el7rhgs.x86_64
vdsm-gluster-4.17.33-1.el7rhgs.noarch
gluster-nagios-addons-0.2.8-1.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-8.el7rhgs.x86_64
glusterfs-server-3.8.4-8.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-fuse-3.8.4-8.el7rhgs.x86_64
glusterfs-events-3.8.4-8.el7rhgs.x86_64
glusterfs-libs-3.8.4-8.el7rhgs.x86_64
python-gluster-3.8.4-8.el7rhgs.noarch
[root@dhcp47-60 ~]# 
[root@dhcp47-60 ~]# gluster peer status
Number of Peers: 3

Hostname: 10.70.47.61
Uuid: f4b259db-7add-4d01-bb5e-3c7f9c077bb4
State: Peer in Cluster (Connected)

Hostname: 10.70.47.26
Uuid: 95c24075-02aa-49c1-a1e4-c7e0775e7128
State: Peer in Cluster (Connected)

Hostname: 10.70.47.27
Uuid: 8d1aaf3a-059e-41c2-871b-6c7f5c0dd90b
State: Peer in Cluster (Connected)
[root@dhcp47-60 ~]# 
[root@dhcp47-60 ~]# 
[root@dhcp47-60 ~]# gluster-eventsapi webhook-test http://10.70.46.246:9000/listen --json
{"output": [{"node": "10.70.47.61", "node_status": "UP", "webhook_status": "NOT OK", "error": "('Connection aborted.', error(113, 'No route to host'))"}, {"node": "10.70.47.26", "node_status": "UP", "webhook_status": "NOT OK", "error": "('Connection aborted.', error(113, 'No route to host'))"}, {"node": "10.70.47.27", "node_status": "UP", "webhook_status": "NOT OK", "error": "('Connection aborted.', error(113, 'No route to host'))"}, {"node": "localhost", "node_status": "UP", "webhook_status": "NOT OK", "error": "('Connection aborted.', error(113, 'No route to host'))"}], "error": ""}
[root@dhcp47-60 ~]# gluster-eventsapi status --json
{"output": {"webhooks": ["http://10.70.46.245:9000/listen"], "data": [{"node": "10.70.47.61", "node_status": "UP", "glustereventsd_status": "OK", "error": ""}, {"node": "10.70.47.26", "node_status": "UP", "glustereventsd_status": "OK", "error": ""}, {"node": "10.70.47.27", "node_status": "UP", "glustereventsd_status": "OK", "error": ""}, {"node": "localhost", "node_status": "UP", "glustereventsd_status": "OK", "error": ""}]}, "error": ""}
[root@dhcp47-60 ~]#
[root@dhcp47-60 ~]# gluster-eventsapi reload --json
{"output": [{"node": "10.70.47.61", "node_status": "UP", "reload_status": "OK", "error": ""}, {"node": "10.70.47.26", "node_status": "UP", "reload_status": "OK", "error": ""}, {"node": "10.70.47.27", "node_status": "UP", "reload_status": "OK", "error": ""}, {"node": "localhost", "node_status": "UP", "reload_status": "OK", "error": ""}], "error": ""}
[root@dhcp47-60 ~]# gluster-eventsapi config-get --json
{"output": {"log_level": "INFO", "port": 24009}, "error": ""}
[root@dhcp47-60 ~]# gluster-eventsapi config-set log_level DEBUG
+-------------+-------------+-------------+
|     NODE    | NODE STATUS | SYNC STATUS |
+-------------+-------------+-------------+
| 10.70.47.61 |          UP |          OK |
| 10.70.47.26 |          UP |          OK |
| 10.70.47.27 |          UP |          OK |
|  localhost  |          UP |          OK |
+-------------+-------------+-------------+
[root@dhcp47-60 ~]# gluster-eventsapi config-set log_level DEBUG --json
{"output": "", "error": "Config value not changed. Same config"}
[root@dhcp47-60 ~]# gluster-eventsapi config-set log_level INFO --json
{"output": [{"node": "10.70.47.61", "sync_status": "OK", "node_status": "UP", "error": ""}, {"node": "10.70.47.26", "sync_status": "OK", "node_status": "UP", "error": ""}, {"node": "10.70.47.27", "sync_status": "OK", "node_status": "UP", "error": ""}, {"node": "localhost", "sync_status": "OK", "node_status": "UP", "error": ""}], "error": ""}
[root@dhcp47-60 ~]#
[root@dhcp47-60 ~]# gluster-eventsapi config-reset port --json
{"output": "", "error": "Config value not reset. Already set to default value"}
[root@dhcp47-60 ~]# gluster-eventsapi sync --json
{"output": [{"node": "10.70.47.61", "sync_status": "OK", "node_status": "UP", "error": ""}, {"node": "10.70.47.26", "sync_status": "OK", "node_status": "UP", "error": ""}, {"node": "10.70.47.27", "sync_status": "OK", "node_status": "UP", "error": ""}, {"node": "localhost", "sync_status": "OK", "node_status": "UP", "error": ""}], "error": ""}
[root@dhcp47-60 ~]# 
[root@dhcp47-60 ~]# 
[root@dhcp47-60 ~]# 
[root@dhcp47-60 ~]# gluster-eventsapi --help
usage: gluster-eventsapi [-h]
                         
                         {reload,status,webhook-add,webhook-mod,webhook-del,webhook-test,config-get,config-set,config-reset,sync}
                         ...

positional arguments:
  {reload,status,webhook-add,webhook-mod,webhook-del,webhook-test,config-get,config-set,config-reset,sync}

optional arguments:
  -h, --help            show this help message and exit
[root@dhcp47-60 ~]# gluster-eventsapi reload -h
usage: gluster-eventsapi reload [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      JSON Output
[root@dhcp47-60 ~]# gluster-eventsapi status -h 
usage: gluster-eventsapi status [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      JSON Output
[root@dhcp47-60 ~]# gluster-eventsapi webhook-add -h 
usage: gluster-eventsapi webhook-add [-h] [--bearer_token BEARER_TOKEN]
                                     [--json]
                                     url

positional arguments:
  url                   URL of Webhook

optional arguments:
  -h, --help            show this help message and exit
  --bearer_token BEARER_TOKEN, -t BEARER_TOKEN
                        Bearer Token
  --json                JSON Output
[root@dhcp47-60 ~]# gluster-eventsapi webhook-mod -h 
usage: gluster-eventsapi webhook-mod [-h] [--bearer_token BEARER_TOKEN]
                                     [--json]
                                     url

positional arguments:
  url                   URL of Webhook

optional arguments:
  -h, --help            show this help message and exit
  --bearer_token BEARER_TOKEN, -t BEARER_TOKEN
                        Bearer Token
  --json                JSON Output
[root@dhcp47-60 ~]# gluster-eventsapi webhook-del -h 
usage: gluster-eventsapi webhook-del [-h] [--json] url

positional arguments:
  url         URL of Webhook

optional arguments:
  -h, --help  show this help message and exit
  --json      JSON Output
[root@dhcp47-60 ~]# gluster-eventsapi webhook-test -h 
usage: gluster-eventsapi webhook-test [-h] [--bearer_token BEARER_TOKEN]
                                      [--json]
                                      url

positional arguments:
  url                   URL of Webhook

optional arguments:
  -h, --help            show this help message and exit
  --bearer_token BEARER_TOKEN, -t BEARER_TOKEN
                        Bearer Token
  --json                JSON Output
[root@dhcp47-60 ~]# gluster-eventsapi config-get -h 
usage: gluster-eventsapi config-get [-h] [--name NAME] [--json]

optional arguments:
  -h, --help   show this help message and exit
  --name NAME  Config Name
  --json       JSON Output
[root@dhcp47-60 ~]# gluster-eventsapi config-set -h 
usage: gluster-eventsapi config-set [-h] [--json] name value

positional arguments:
  name        Config Name
  value       Config Value

optional arguments:
  -h, --help  show this help message and exit
  --json      JSON Output
[root@dhcp47-60 ~]# gluster-eventsapi config-reset -h 
usage: gluster-eventsapi config-reset [-h] [--json] name

positional arguments:
  name        Config Name or all

optional arguments:
  -h, --help  show this help message and exit
  --json      JSON Output
[root@dhcp47-60 ~]# gluster-eventsapi sync -h 
usage: gluster-eventsapi sync [-h] [--json]

optional arguments:
  -h, --help  show this help message and exit
  --json      JSON Output
[root@dhcp47-60 ~]#
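For reference, a health check built on the `status --json` envelope captured above could look like the sketch below. The string is a trimmed copy (one node instead of four) of the recorded output; field names match the verified log exactly.

```python
import json

# Trimmed copy of the verified `gluster-eventsapi status --json` output above.
status = json.loads(
    '{"output": {"webhooks": ["http://10.70.46.245:9000/listen"], '
    '"data": [{"node": "localhost", "node_status": "UP", '
    '"glustereventsd_status": "OK", "error": ""}]}, "error": ""}'
)

# Collect any node that is down or whose glustereventsd is not healthy.
down = [n["node"] for n in status["output"]["data"]
        if n["node_status"] != "UP" or n["glustereventsd_status"] != "OK"]

print("all nodes healthy" if not down else "problem nodes: %s" % down)
```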

Comment 9 errata-xmlrpc 2017-03-23 06:19:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

