Bug 1512937 - [RFE] Duplicated hosts in Grafana (listed by FQDN and IP)
Summary: [RFE] Duplicated hosts in Grafana (listed by FQDN and IP)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: web-admin-tendrl-monitoring-integration
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: gowtham
QA Contact: Daniel Horák
URL:
Whiteboard:
Duplicates: 1515094 1625785 (view as bug list)
Depends On:
Blocks: 1503132
 
Reported: 2017-11-14 13:32 UTC by Daniel Horák
Modified: 2018-10-16 11:15 UTC
CC: 9 users

Fixed In Version: tendrl-commons-1.6.3-2.el7rhgs tendrl-monitoring-integration-1.6.3-1.el7rhgs
Doc Type: Enhancement
Doc Text:
The host-level dashboard in Grafana listed a given storage node twice, by FQDN and by IP address, depending on how the peer probe was done. This caused duplicate data for the same node to be displayed in the time series data and on the Grafana dashboard. With this fix, the host-level dashboard and the other dashboards display each host only once, under the name that was used for the peer probe when the Gluster cluster was created.
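The deduplication idea described in the doc text above can be sketched as follows. This is an illustrative sketch only, not the actual tendrl-monitoring-integration code; the function name and the alias mapping are assumptions:

```python
def dedup_hosts(reported, probe_names):
    """Keep one entry per node: the name used during the peer probe.

    reported    -- host names as they appear in the time series data
                   (may contain both FQDN and IP for the same node)
    probe_names -- mapping of every known alias -> peer-probe name
    """
    seen = set()
    result = []
    for name in reported:
        # Fall back to the reported name if no alias mapping is known.
        canonical = probe_names.get(name, name)
        if canonical not in seen:
            seen.add(canonical)
            result.append(canonical)
    return result


# Example: a node that was peer probed by IP but also reports its FQDN.
aliases = {
    "gl1.example.com": "192.0.2.1",
    "192.0.2.1": "192.0.2.1",
}
print(dedup_hosts(["gl1.example.com", "192.0.2.1"], aliases))
# -> ['192.0.2.1']
```

Both aliases collapse into the single name used for the peer probe, which matches the behavior described after the fix.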
Clone Of:
Environment:
Last Closed: 2018-09-04 06:58:06 UTC
Target Upstream Version:


Attachments
Screenshot - duplicated hosts on Tendrl-Gluster-Hosts dashboard (88.61 KB, image/png)
2017-11-14 13:32 UTC, Daniel Horák


Links
System ID Priority Status Summary Last Updated
Github https://github.com/Tendrl monitoring-integration issues 361 None None None 2018-03-05 15:42:52 UTC
Red Hat Bugzilla 1515094 None CLOSED [RFE] tendrl should work with IP address 2019-06-26 07:35:46 UTC
Red Hat Bugzilla 1517077 None CLOSED [RFE] Grafana dashboard not showing all the volume in UP mode when brick path has "short names" 2019-06-26 07:35:46 UTC
Red Hat Bugzilla 1575581 None CLOSED [Doc RFE] Document enhancement made to the Grafana Dashboard 2019-06-26 07:35:46 UTC
Red Hat Product Errata RHSA-2018:2616 None None None 2018-09-04 06:59:03 UTC

Internal Links: 1515094 1517077 1575581

Description Daniel Horák 2017-11-14 13:32:19 UTC
Created attachment 1351945 [details]
Screenshot - duplicated hosts on Tendrl-Gluster-Hosts dashboard

Description of problem:
  When Gluster Trusted Storage Pool is created using IP addresses (not hostnames), Grafana lists each host twice - by hostname and by IP address (see the attachment).

Version-Release number of selected component (if applicable):
  Tendrl Server
  collectd-5.7.0-4.el7rhgs.x86_64
  collectd-ping-5.7.0-4.el7rhgs.x86_64
  grafana-4.3.2-3.el7rhgs.x86_64
  libcollectdclient-5.7.0-4.el7rhgs.x86_64
  tendrl-ansible-1.5.4-1.el7rhgs.noarch
  tendrl-api-1.5.4-2.el7rhgs.noarch
  tendrl-api-httpd-1.5.4-2.el7rhgs.noarch
  tendrl-commons-1.5.4-2.el7rhgs.noarch
  tendrl-grafana-plugins-1.5.4-3.el7rhgs.noarch
  tendrl-grafana-selinux-1.5.3-2.el7rhgs.noarch
  tendrl-monitoring-integration-1.5.4-3.el7rhgs.noarch
  tendrl-node-agent-1.5.4-2.el7rhgs.noarch
  tendrl-notifier-1.5.4-2.el7rhgs.noarch
  tendrl-selinux-1.5.3-2.el7rhgs.noarch
  tendrl-ui-1.5.4-2.el7rhgs.noarch
  
  Gluster Storage Server:
  collectd-5.7.0-4.el7rhgs.x86_64
  collectd-ping-5.7.0-4.el7rhgs.x86_64
  glusterfs-3.8.4-52.el7rhgs.x86_64
  glusterfs-api-3.8.4-52.el7rhgs.x86_64
  glusterfs-cli-3.8.4-52.el7rhgs.x86_64
  glusterfs-client-xlators-3.8.4-52.el7rhgs.x86_64
  glusterfs-events-3.8.4-52.el7rhgs.x86_64
  glusterfs-fuse-3.8.4-52.el7rhgs.x86_64
  glusterfs-geo-replication-3.8.4-52.el7rhgs.x86_64
  glusterfs-libs-3.8.4-52.el7rhgs.x86_64
  glusterfs-rdma-3.8.4-52.el7rhgs.x86_64
  glusterfs-server-3.8.4-52.el7rhgs.x86_64
  gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
  gluster-nagios-common-0.2.4-1.el7rhgs.noarch
  libcollectdclient-5.7.0-4.el7rhgs.x86_64
  libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.3.x86_64
  python-gluster-3.8.4-52.el7rhgs.noarch
  tendrl-collectd-selinux-1.5.3-2.el7rhgs.noarch
  tendrl-commons-1.5.4-2.el7rhgs.noarch
  tendrl-gluster-integration-1.5.4-2.el7rhgs.noarch
  tendrl-node-agent-1.5.4-2.el7rhgs.noarch
  tendrl-selinux-1.5.3-2.el7rhgs.noarch
  vdsm-gluster-4.17.33-1.2.el7rhgs.noarch


How reproducible:
  100%

Steps to Reproduce:
1. Prepare Gluster Cluster, use IP addresses in gdeploy configuration file (instead of hostnames):
  Example gdeploy configuration file:
    [hosts]
    192.0.2.1
    192.0.2.2
    192.0.2.3
    192.0.2.4
    192.0.2.5
    192.0.2.6

    [peer]
    action=probe
    ignore_peer_errors=no

2. Install and configure Tendrl Server and Tendrl Node Agents.
3. Import Gluster Cluster into Tendrl.
4. Open Grafana Tendrl-Gluster-Hosts dashboard.
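Whether the reproduction hits the bug depends on whether the `[hosts]` entries in the gdeploy file are IP addresses or hostnames. A small illustrative helper (not part of gdeploy or Tendrl) that makes that distinction explicit:

```python
import ipaddress


def is_ip_address(entry):
    """Return True if a gdeploy [hosts] entry is an IP address,
    False if it is a hostname/FQDN."""
    try:
        ipaddress.ip_address(entry)
        return True
    except ValueError:
        return False


# Sample entries mirroring the reproduction setup above.
hosts = ["192.0.2.1", "192.0.2.2", "gl3.example.com"]
print([(h, is_ip_address(h)) for h in hosts])
# -> [('192.0.2.1', True), ('192.0.2.2', True), ('gl3.example.com', False)]
```

Clusters whose `[hosts]` section consists of IP entries are the ones that showed the duplicate listing before the fix.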

Actual results:
  The Host Name selectbox contains each server twice: once listed by hostname and once by IP address.
  (see screenshot)

Expected results:
  Hosts should be listed only once by Hostname.

Additional info:

Comment 1 Nishanth Thomas 2017-11-14 13:44:12 UTC
Using IP addresses to peer probe a Gluster cluster is not a recommended configuration.
The same applies to Tendrl; you need to use FQDNs.

Comment 3 Daniel Horák 2017-11-15 08:23:34 UTC
If I understand it correctly, using IPs for peer probe is not generally recommended, but it is a supported Gluster configuration. So if the decision is not to fix this issue now, we should answer the following question:

What is the full impact of this kind of configuration on Tendrl?
If it is only the duplicated visibility of hosts in the selectbox on the Grafana Tendrl-Gluster-Hosts Dashboard, then we can leave it for now as a known issue and not limit the supported Gluster configurations for Tendrl.
But if it might cause other unexpected issues, we have to properly and explicitly document that this particular Gluster configuration is not supported and that such a cluster should not be imported into Tendrl.

Comment 4 Nishanth Thomas 2017-11-15 13:30:18 UTC
At the moment I propose to explicitly document that this particular Gluster configuration is not supported, as we are not recommending it to customers.

Comment 5 Martin Bukatovic 2018-03-06 11:51:57 UTC
Could you review this BZ, which we discussed at the "RHGS WA with RHS One testing" meeting on 2018-03-05, and add a comment here with the severity from the RHSOne perspective? Thanks a lot for the feedback.

Comment 8 Martin Bukatovic 2018-04-05 10:00:47 UTC
Could you recheck the status of this BZ and add the Fixed In Version if possible, to move it to the ON_QA state?

Comment 11 Daniel Horák 2018-04-27 13:18:28 UTC
I've retested this bug with various scenarios consisting of:
* peer probe done using FQDN or IP addresses
* volume(s) created using FQDN or IP addresses

In all cases, each host on the various Grafana dashboards (Hosts, Bricks) is
properly listed only once (by FQDN or IP, depending on how the peer probe was done).
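The duplicate check performed during this verification (each node appears exactly once in the dashboard's host list, whatever name it goes by) can be sketched as a small helper; the function name and sample data are illustrative only:

```python
from collections import Counter


def duplicated_entries(host_names):
    """Return the names that appear more than once in a dashboard host list."""
    return [name for name, count in Counter(host_names).items() if count > 1]


# After the fix, each node shows up once, so no duplicates are reported.
fixed_list = ["192.0.2.1", "192.0.2.2", "gl3.example.com"]
print(duplicated_entries(fixed_list))
# -> []
```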

Just small note related to:
(In reply to Daniel Horák from comment #0)
> Expected results:
>   Hosts should be listed only once by Hostname.
The hosts are listed by FQDN/hostname or IP, depending on how the peer
probe for the particular node was done, which seems reasonable.

>> VERIFIED

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Version-Release number of selected components:
  Tendrl Server
  # cat /etc/redhat-release 
    Red Hat Enterprise Linux Server release 7.5 (Maipo)
  # rpm -qa | grep -e tendrl -e collectd -e gluster -e etcd | sort
    collectd-5.7.2-3.1.el7rhgs.x86_64
    collectd-ping-5.7.2-3.1.el7rhgs.x86_64
    etcd-3.2.7-1.el7.x86_64
    libcollectdclient-5.7.2-3.1.el7rhgs.x86_64
    python-etcd-0.4.5-2.el7rhgs.noarch
    rubygem-etcd-0.3.0-2.el7rhgs.noarch
    tendrl-ansible-1.6.3-3.el7rhgs.noarch
    tendrl-api-1.6.3-2.el7rhgs.noarch
    tendrl-api-httpd-1.6.3-2.el7rhgs.noarch
    tendrl-commons-1.6.3-3.el7rhgs.noarch
    tendrl-grafana-plugins-1.6.3-1.el7rhgs.noarch
    tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
    tendrl-monitoring-integration-1.6.3-1.el7rhgs.noarch
    tendrl-node-agent-1.6.3-3.el7rhgs.noarch
    tendrl-notifier-1.6.3-2.el7rhgs.noarch
    tendrl-selinux-1.5.4-2.el7rhgs.noarch
    tendrl-ui-1.6.3-1.el7rhgs.noarch
  
  Gluster Storage Server
  # cat /etc/redhat-release 
    Red Hat Enterprise Linux Server release 7.5 (Maipo)
  # cat /etc/redhat-storage-release 
    Red Hat Gluster Storage Server 3.4.0
  # rpm -qa | grep -e tendrl -e collectd -e gluster | sort
    collectd-5.7.2-3.1.el7rhgs.x86_64
    collectd-ping-5.7.2-3.1.el7rhgs.x86_64
    glusterfs-3.12.2-8.el7rhgs.x86_64
    glusterfs-api-3.12.2-8.el7rhgs.x86_64
    glusterfs-cli-3.12.2-8.el7rhgs.x86_64
    glusterfs-client-xlators-3.12.2-8.el7rhgs.x86_64
    glusterfs-events-3.12.2-8.el7rhgs.x86_64
    glusterfs-fuse-3.12.2-8.el7rhgs.x86_64
    glusterfs-geo-replication-3.12.2-8.el7rhgs.x86_64
    glusterfs-libs-3.12.2-8.el7rhgs.x86_64
    glusterfs-rdma-3.12.2-8.el7rhgs.x86_64
    glusterfs-server-3.12.2-8.el7rhgs.x86_64
    gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
    gluster-nagios-common-0.2.4-1.el7rhgs.noarch
    libcollectdclient-5.7.2-3.1.el7rhgs.x86_64
    libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.2.x86_64
    python2-gluster-3.12.2-8.el7rhgs.x86_64
    tendrl-collectd-selinux-1.5.4-2.el7rhgs.noarch
    tendrl-commons-1.6.3-3.el7rhgs.noarch
    tendrl-gluster-integration-1.6.3-2.el7rhgs.noarch
    tendrl-node-agent-1.6.3-3.el7rhgs.noarch
    tendrl-selinux-1.5.4-2.el7rhgs.noarch
    vdsm-gluster-4.19.43-2.3.el7rhgs.noarch

Comment 15 Shubhendu Tripathi 2018-09-04 02:46:45 UTC
Boob looks good to me now.

Comment 16 Shubhendu Tripathi 2018-09-04 02:47:55 UTC
(In reply to Shubhendu Tripathi from comment #15)
> Boob looks good to me now.

I am really sorry about the bad word; I meant "Bug".

Comment 18 errata-xmlrpc 2018-09-04 06:58:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616

Comment 19 Anton Mark 2018-09-18 14:18:55 UTC
*** Bug 1625785 has been marked as a duplicate of this bug. ***

Comment 20 Nishanth Thomas 2018-10-16 11:15:08 UTC
*** Bug 1515094 has been marked as a duplicate of this bug. ***

