Bug 1509282 - deleted volumes and bricks stay in Grafana
Summary: deleted volumes and bricks stay in Grafana
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: web-admin-tendrl-monitoring-integration
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: gowtham
QA Contact: Lubos Trilety
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-03 13:26 UTC by Martin Kudlej
Modified: 2017-12-18 04:39 UTC
CC List: 4 users

Fixed In Version: tendrl-monitoring-integration-1.5.4-3.el7rhgs
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-18 04:39:36 UTC
Target Upstream Version:


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2017:3478 normal SHIPPED_LIVE RHGS Web Administration packages 2017-12-18 09:34:49 UTC
Github Tendrl monitoring-integration issues 241 None None None 2017-11-08 09:17:47 UTC

Description Martin Kudlej 2017-11-03 13:26:19 UTC
Description of problem:
If a user deletes a volume or brick, it remains visible in the Grafana dashboards, which can give the user a false impression that there is a problem in the cluster.

Version-Release number of selected component (if applicable):
etcd-3.2.7-1.el7.x86_64
glusterfs-3.8.4-18.4.el7.x86_64
glusterfs-3.8.4-50.el7rhgs.x86_64
glusterfs-api-3.8.4-50.el7rhgs.x86_64
glusterfs-cli-3.8.4-50.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-18.4.el7.x86_64
glusterfs-client-xlators-3.8.4-50.el7rhgs.x86_64
glusterfs-events-3.8.4-50.el7rhgs.x86_64
glusterfs-fuse-3.8.4-18.4.el7.x86_64
glusterfs-fuse-3.8.4-50.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-50.el7rhgs.x86_64
glusterfs-libs-3.8.4-18.4.el7.x86_64
glusterfs-libs-3.8.4-50.el7rhgs.x86_64
glusterfs-server-3.8.4-50.el7rhgs.x86_64
mkudlej-usm2-client.usmqe.lab.eng.blr.redhat.com | SUCCESS | rc=0 >>
mkudlej-usm2-gl1.usmqe.lab.eng.blr.redhat.com | SUCCESS | rc=0 >>
mkudlej-usm2-gl2.usmqe.lab.eng.blr.redhat.com | SUCCESS | rc=0 >>
mkudlej-usm2-gl3.usmqe.lab.eng.blr.redhat.com | SUCCESS | rc=0 >>
mkudlej-usm2-gl4.usmqe.lab.eng.blr.redhat.com | SUCCESS | rc=0 >>
mkudlej-usm2-gl5.usmqe.lab.eng.blr.redhat.com | SUCCESS | rc=0 >>
mkudlej-usm2-gl6.usmqe.lab.eng.blr.redhat.com | SUCCESS | rc=0 >>
mkudlej-usm2-server.usmqe.lab.eng.blr.redhat.com | SUCCESS | rc=0 >>
python-etcd-0.4.5-1.noarch
rubygem-etcd-0.3.0-1.el7.noarch
tendrl-ansible-1.5.3-2.el7rhgs.noarch
tendrl-api-1.5.3-2.el7rhgs.noarch
tendrl-api-httpd-1.5.3-2.el7rhgs.noarch
tendrl-commons-1.5.3-1.el7rhgs.noarch
tendrl-gluster-integration-1.5.3-2.el7rhgs.noarch
tendrl-grafana-plugins-1.5.3-2.el7rhgs.noarch
tendrl-grafana-selinux-1.5.3-2.el7rhgs.noarch
tendrl-monitoring-integration-1.5.3-2.el7rhgs.noarch
tendrl-node-agent-1.5.3-3.el7rhgs.noarch
tendrl-notifier-1.5.3-1.el7rhgs.noarch
tendrl-selinux-1.5.3-2.el7rhgs.noarch
tendrl-ui-1.5.3-2.el7rhgs.noarch


How reproducible:
100%

Steps to Reproduce:
1. Create a snapshot of a volume.
2. Clone the snapshot - a new volume is created.
3. Delete the snapshot - the cloned volume from step 2 is deleted.
4. Check the Grafana dashboards for stale, misleading information about the deleted volume and its bricks (see the command sketch below).
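
A minimal reproduction sketch using the gluster CLI (volume and snapshot names such as vol1, snap1 and clone1 are placeholders, not the names used during testing):

  # assumes an existing gluster volume "vol1" that is already shown in Grafana
  gluster snapshot create snap1 vol1     # step 1: create a snapshot
  gluster snapshot clone clone1 snap1    # step 2: clone it - a new volume "clone1" appears
  gluster snapshot delete snap1          # step 3: per the steps above, the volume from step 2 goes away
  # step 4: open the Grafana dashboards and look for stale panels for "clone1" and its bricks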

Expected results:
There is no info about deleted volumes and bricks in any dashboard.

Comment 1 Petr Penicka 2017-11-08 14:12:07 UTC
Triage Nov 8: agreed to include in 3.3.1.

Comment 3 Lubos Trilety 2017-11-14 14:51:03 UTC
Tested with:
tendrl-monitoring-integration-1.5.4-3.el7rhgs.noarch

Comment 4 Martin Kudlej 2017-11-14 15:23:35 UTC
I still see this issue with the reproducer described in the first comment.
etcd-3.2.7-1.el7.x86_64
glusterfs-3.8.4-52.el7_4.x86_64
glusterfs-3.8.4-52.el7rhgs.x86_64
glusterfs-api-3.8.4-52.el7rhgs.x86_64
glusterfs-cli-3.8.4-52.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-52.el7_4.x86_64
glusterfs-client-xlators-3.8.4-52.el7rhgs.x86_64
glusterfs-events-3.8.4-52.el7rhgs.x86_64
glusterfs-fuse-3.8.4-52.el7_4.x86_64
glusterfs-fuse-3.8.4-52.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-52.el7rhgs.x86_64
glusterfs-libs-3.8.4-52.el7_4.x86_64
glusterfs-libs-3.8.4-52.el7rhgs.x86_64
glusterfs-rdma-3.8.4-52.el7rhgs.x86_64
glusterfs-server-3.8.4-52.el7rhgs.x86_64
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.3.x86_64
python-etcd-0.4.5-1.el7rhgs.noarch
python-gluster-3.8.4-52.el7rhgs.noarch
rubygem-etcd-0.3.0-1.el7rhgs.noarch
tendrl-ansible-1.5.4-1.el7rhgs.noarch
tendrl-api-1.5.4-2.el7rhgs.noarch
tendrl-api-httpd-1.5.4-2.el7rhgs.noarch
tendrl-collectd-selinux-1.5.3-2.el7rhgs.noarch
tendrl-commons-1.5.4-2.el7rhgs.noarch
tendrl-gluster-integration-1.5.4-2.el7rhgs.noarch
tendrl-grafana-plugins-1.5.4-3.el7rhgs.noarch
tendrl-grafana-selinux-1.5.3-2.el7rhgs.noarch
tendrl-monitoring-integration-1.5.4-3.el7rhgs.noarch
tendrl-node-agent-1.5.4-2.el7rhgs.noarch
tendrl-notifier-1.5.4-2.el7rhgs.noarch
tendrl-selinux-1.5.3-2.el7rhgs.noarch
tendrl-ui-1.5.4-2.el7rhgs.noarch
vdsm-gluster-4.17.33-1.2.el7rhgs.noarch
--> ASSIGNED

Comment 5 Nishanth Thomas 2017-11-15 08:42:07 UTC
Snapshot management is not currently supported. Tendrl does not understand when a snapshot is deleted. Deleting a snapshot might delete the clone volume, but it won't fire the "volume_delete" event, which Tendrl needs in order to understand that a volume was deleted. If you delete the volume itself, gluster fires a "volume_delete" event, which Tendrl processes, updating the respective data stores, and the change is eventually reflected in the dashboard. Please test this scenario; I am flipping the state of the bug back to ON_QA.
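
A short illustration of the distinction described above (volume and snapshot names are placeholders): deleting the volume itself produces the event Tendrl consumes, while a snapshot delete does not.

  gluster volume stop clone1       # a volume must be stopped before it can be deleted
  gluster volume delete clone1     # gluster emits a volume delete event; Tendrl updates its stores and Grafana
  gluster snapshot delete snap1    # no volume delete event is emitted, so Tendrl/Grafana are not updated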

Comment 6 Martin Kudlej 2017-11-15 11:29:44 UTC
It seems that I've hit a different bug, bug 1513414. I've retested this with a different cluster and it worked. --> VERIFIED

Comment 8 errata-xmlrpc 2017-12-18 04:39:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3478

