Bug 1603175 - GET /clusters api call returns "Invalid JSON received." for cluster with geo-replication
Summary: GET /clusters api call returns "Invalid JSON received." for cluster with geo-replication
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: web-admin-tendrl-gluster-integration
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Shubhendu Tripathi
QA Contact: Daniel Horák
URL:
Whiteboard:
Depends On:
Blocks: 1503137 1517422 1518276
 
Reported: 2018-07-19 11:34 UTC by Daniel Horák
Modified: 2018-09-04 07:09 UTC
CC List: 6 users

Fixed In Version: tendrl-gluster-integration-1.6.3-8.el7rhgs
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-04 07:08:56 UTC
Embargoed:


Attachments:


Links:
  Github: Tendrl gluster-integration issue 684 (last updated 2018-07-20 15:24:01 UTC)
  Red Hat Product Errata: RHSA-2018:2616 (last updated 2018-09-04 07:09:57 UTC)

Description Daniel Horák 2018-07-19 11:34:44 UTC
Description of problem:
  When I try to import a cluster with configured geo-replication, the GET
  .../clusters API call starts to return "Invalid JSON received."

Version-Release number of selected component (if applicable):
  RHGS WA Server:
  Red Hat Enterprise Linux Server release 7.5 (Maipo)
  tendrl-ansible-1.6.3-5.el7rhgs.noarch
  tendrl-api-1.6.3-4.el7rhgs.noarch
  tendrl-api-httpd-1.6.3-4.el7rhgs.noarch
  tendrl-commons-1.6.3-9.el7rhgs.noarch
  tendrl-grafana-plugins-1.6.3-7.el7rhgs.noarch
  tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
  tendrl-monitoring-integration-1.6.3-7.el7rhgs.noarch
  tendrl-node-agent-1.6.3-9.el7rhgs.noarch
  tendrl-notifier-1.6.3-4.el7rhgs.noarch
  tendrl-selinux-1.5.4-2.el7rhgs.noarch
  tendrl-ui-1.6.3-8.el7rhgs.noarch

  Gluster Storage Server:
  Red Hat Enterprise Linux Server release 7.5 (Maipo)
  Red Hat Gluster Storage Server 3.4.0
  glusterfs-3.12.2-14.el7rhgs.x86_64
  glusterfs-api-3.12.2-14.el7rhgs.x86_64
  glusterfs-cli-3.12.2-14.el7rhgs.x86_64
  glusterfs-client-xlators-3.12.2-14.el7rhgs.x86_64
  glusterfs-events-3.12.2-14.el7rhgs.x86_64
  glusterfs-fuse-3.12.2-14.el7rhgs.x86_64
  glusterfs-geo-replication-3.12.2-14.el7rhgs.x86_64
  glusterfs-libs-3.12.2-14.el7rhgs.x86_64
  glusterfs-rdma-3.12.2-14.el7rhgs.x86_64
  glusterfs-server-3.12.2-14.el7rhgs.x86_64
  tendrl-collectd-selinux-1.5.4-2.el7rhgs.noarch
  tendrl-commons-1.6.3-9.el7rhgs.noarch
  tendrl-gluster-integration-1.6.3-7.el7rhgs.noarch
  tendrl-node-agent-1.6.3-9.el7rhgs.noarch
  tendrl-selinux-1.5.4-2.el7rhgs.noarch

How reproducible:
  100%

Steps to Reproduce:
1. Prepare two clusters and configure geo-replication.
  (for example as described in Bug 1578716)
2. Try to import the cluster configured as the geo-replication master.
3. Watch http://tendrl-server.example.com/#/clusters page or
  http://tendrl-server.example.com/api/1.0/clusters
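
For step 3, a minimal polling sketch (Python with the requests library; the
hostname is the example one from step 3, and the token header is only an
assumption about Tendrl API authentication, not something stated in this
report):

  import requests

  # Hypothetical WA server address; adjust to your environment.
  API_URL = "http://tendrl-server.example.com/api/1.0/clusters"

  # Authentication is environment-specific; a bearer token header is an
  # assumption here, not taken from this report.
  headers = {"Authorization": "Bearer <access-token>"}

  resp = requests.get(API_URL, headers=headers, timeout=10)
  print(resp.status_code)
  print(resp.text)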

Actual results:
  After a short time, the #/clusters web page shows:
    "No Clusters Detected",
  and the GET /clusters API call returns:
    {"errors":{"message":"Invalid JSON received."}}
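
Note that the error envelope itself is well-formed JSON; the "Invalid JSON
received." message refers to data the API backend failed to parse. A sketch
for telling this failure apart from a healthy response (same hostname and
auth assumptions as above):

  import json
  import requests

  resp = requests.get("http://tendrl-server.example.com/api/1.0/clusters")
  try:
      body = json.loads(resp.text)
  except ValueError:
      print("response body is not JSON at all")
  else:
      if isinstance(body, dict) and "errors" in body:
          # matches {"errors":{"message":"Invalid JSON received."}}
          print("API error:", body["errors"].get("message"))
      else:
          print("clusters:", body)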

Expected results:
  The GET /clusters call should return the correct list of available clusters.

Additional info:
  There seems to be no real error reported in /var/log/messages.

  The Import Cluster job launched in step 2 fails; that might be a separate BZ.

Comment 1 Daniel Horák 2018-07-20 09:03:44 UTC
Marking as TestBlocker, because it blocks any testing related to Geo-Replication (e.g. bug 1517422 and bug 1518276).

Comment 3 Daniel Horák 2018-08-03 08:39:48 UTC
Tested and verified on:
RHGS WA Server:
  Red Hat Enterprise Linux Server release 7.5 (Maipo)
  collectd-5.7.2-3.1.el7rhgs.x86_64
  collectd-ping-5.7.2-3.1.el7rhgs.x86_64
  etcd-3.2.7-1.el7.x86_64
  libcollectdclient-5.7.2-3.1.el7rhgs.x86_64
  python-etcd-0.4.5-2.el7rhgs.noarch
  rubygem-etcd-0.3.0-2.el7rhgs.noarch
  tendrl-ansible-1.6.3-6.el7rhgs.noarch
  tendrl-api-1.6.3-5.el7rhgs.noarch
  tendrl-api-httpd-1.6.3-5.el7rhgs.noarch
  tendrl-commons-1.6.3-11.el7rhgs.noarch
  tendrl-grafana-plugins-1.6.3-8.el7rhgs.noarch
  tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
  tendrl-monitoring-integration-1.6.3-8.el7rhgs.noarch
  tendrl-node-agent-1.6.3-9.el7rhgs.noarch
  tendrl-notifier-1.6.3-4.el7rhgs.noarch
  tendrl-selinux-1.5.4-2.el7rhgs.noarch
  tendrl-ui-1.6.3-9.el7rhgs.noarch

Gluster Storage Server:
  Red Hat Enterprise Linux Server release 7.5 (Maipo)
  Red Hat Gluster Storage Server 3.4.0
  collectd-5.7.2-3.1.el7rhgs.x86_64
  collectd-ping-5.7.2-3.1.el7rhgs.x86_64
  glusterfs-3.12.2-15.el7rhgs.x86_64
  glusterfs-api-3.12.2-15.el7rhgs.x86_64
  glusterfs-cli-3.12.2-15.el7rhgs.x86_64
  glusterfs-client-xlators-3.12.2-15.el7rhgs.x86_64
  glusterfs-events-3.12.2-15.el7rhgs.x86_64
  glusterfs-fuse-3.12.2-15.el7rhgs.x86_64
  glusterfs-geo-replication-3.12.2-15.el7rhgs.x86_64
  glusterfs-libs-3.12.2-15.el7rhgs.x86_64
  glusterfs-rdma-3.12.2-15.el7rhgs.x86_64
  glusterfs-server-3.12.2-15.el7rhgs.x86_64
  gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
  gluster-nagios-common-0.2.4-1.el7rhgs.noarch
  libcollectdclient-5.7.2-3.1.el7rhgs.x86_64
  libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.6.x86_64
  python2-gluster-3.12.2-15.el7rhgs.x86_64
  python-etcd-0.4.5-2.el7rhgs.noarch
  tendrl-collectd-selinux-1.5.4-2.el7rhgs.noarch
  tendrl-commons-1.6.3-11.el7rhgs.noarch
  tendrl-gluster-integration-1.6.3-9.el7rhgs.noarch
  tendrl-node-agent-1.6.3-9.el7rhgs.noarch
  tendrl-selinux-1.5.4-2.el7rhgs.noarch
  vdsm-gluster-4.19.43-2.3.el7rhgs.noarch

The GET /clusters API call now properly returns the list of clusters.

>> VERIFIED
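
A quick check along these lines confirms the fixed behaviour (again a sketch;
hostname and authentication are assumptions, as in the sketches above):

  import requests

  resp = requests.get("http://tendrl-server.example.com/api/1.0/clusters")
  resp.raise_for_status()
  clusters = resp.json()  # would raise ValueError on a non-JSON body
  assert not (isinstance(clusters, dict) and "errors" in clusters), clusters
  print("GET /clusters returned:", clusters)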

Comment 5 errata-xmlrpc 2018-09-04 07:08:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616

