Bug 1509282

Summary: deleted volumes and bricks stay in Grafana
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Martin Kudlej <mkudlej>
Component: web-admin-tendrl-monitoring-integration
Assignee: gowtham <gshanmug>
Status: CLOSED ERRATA
QA Contact: Lubos Trilety <ltrilety>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: rhgs-3.3
CC: ltrilety, nthomas, ppenicka, sankarshan
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: tendrl-monitoring-integration-1.5.4-3.el7rhgs
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-12-18 04:39:36 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Martin Kudlej 2017-11-03 13:26:19 UTC
Description of problem:
If a user deletes a volume or a brick, it is still visible in the Grafana dashboards, so the user can get the false impression that there is a problem in the cluster.

Version-Release number of selected component (if applicable):
etcd-3.2.7-1.el7.x86_64
glusterfs-3.8.4-18.4.el7.x86_64
glusterfs-3.8.4-50.el7rhgs.x86_64
glusterfs-api-3.8.4-50.el7rhgs.x86_64
glusterfs-cli-3.8.4-50.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-18.4.el7.x86_64
glusterfs-client-xlators-3.8.4-50.el7rhgs.x86_64
glusterfs-events-3.8.4-50.el7rhgs.x86_64
glusterfs-fuse-3.8.4-18.4.el7.x86_64
glusterfs-fuse-3.8.4-50.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-50.el7rhgs.x86_64
glusterfs-libs-3.8.4-18.4.el7.x86_64
glusterfs-libs-3.8.4-50.el7rhgs.x86_64
glusterfs-server-3.8.4-50.el7rhgs.x86_64
mkudlej-usm2-client.usmqe.lab.eng.blr.redhat.com | SUCCESS | rc=0 >>
mkudlej-usm2-gl1.usmqe.lab.eng.blr.redhat.com | SUCCESS | rc=0 >>
mkudlej-usm2-gl2.usmqe.lab.eng.blr.redhat.com | SUCCESS | rc=0 >>
mkudlej-usm2-gl3.usmqe.lab.eng.blr.redhat.com | SUCCESS | rc=0 >>
mkudlej-usm2-gl4.usmqe.lab.eng.blr.redhat.com | SUCCESS | rc=0 >>
mkudlej-usm2-gl5.usmqe.lab.eng.blr.redhat.com | SUCCESS | rc=0 >>
mkudlej-usm2-gl6.usmqe.lab.eng.blr.redhat.com | SUCCESS | rc=0 >>
mkudlej-usm2-server.usmqe.lab.eng.blr.redhat.com | SUCCESS | rc=0 >>
python-etcd-0.4.5-1.noarch
rubygem-etcd-0.3.0-1.el7.noarch
tendrl-ansible-1.5.3-2.el7rhgs.noarch
tendrl-api-1.5.3-2.el7rhgs.noarch
tendrl-api-httpd-1.5.3-2.el7rhgs.noarch
tendrl-commons-1.5.3-1.el7rhgs.noarch
tendrl-gluster-integration-1.5.3-2.el7rhgs.noarch
tendrl-grafana-plugins-1.5.3-2.el7rhgs.noarch
tendrl-grafana-selinux-1.5.3-2.el7rhgs.noarch
tendrl-monitoring-integration-1.5.3-2.el7rhgs.noarch
tendrl-node-agent-1.5.3-3.el7rhgs.noarch
tendrl-notifier-1.5.3-1.el7rhgs.noarch
tendrl-selinux-1.5.3-2.el7rhgs.noarch
tendrl-ui-1.5.3-2.el7rhgs.noarch


How reproducible:
100%

Steps to Reproduce:
1. create a snapshot
2. clone the snapshot - a new volume is created
3. delete the snapshot clone from step 2 - the volume created in step 2 is deleted
4. check the Grafana dashboards for stale, misleading info about the deleted volume

Expected results:
There is no info about deleted volumes and bricks in any dashboard.

Comment 1 Petr Penicka 2017-11-08 14:12:07 UTC
Triage Nov 8: agreed to include in 3.3.1.

Comment 3 Lubos Trilety 2017-11-14 14:51:03 UTC
Tested with:
tendrl-monitoring-integration-1.5.4-3.el7rhgs.noarch

Comment 4 Martin Kudlej 2017-11-14 15:23:35 UTC
I still see this issue with the reproducer described in the first comment.
etcd-3.2.7-1.el7.x86_64
glusterfs-3.8.4-52.el7_4.x86_64
glusterfs-3.8.4-52.el7rhgs.x86_64
glusterfs-api-3.8.4-52.el7rhgs.x86_64
glusterfs-cli-3.8.4-52.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-52.el7_4.x86_64
glusterfs-client-xlators-3.8.4-52.el7rhgs.x86_64
glusterfs-events-3.8.4-52.el7rhgs.x86_64
glusterfs-fuse-3.8.4-52.el7_4.x86_64
glusterfs-fuse-3.8.4-52.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-52.el7rhgs.x86_64
glusterfs-libs-3.8.4-52.el7_4.x86_64
glusterfs-libs-3.8.4-52.el7rhgs.x86_64
glusterfs-rdma-3.8.4-52.el7rhgs.x86_64
glusterfs-server-3.8.4-52.el7rhgs.x86_64
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.3.x86_64
python-etcd-0.4.5-1.el7rhgs.noarch
python-gluster-3.8.4-52.el7rhgs.noarch
rubygem-etcd-0.3.0-1.el7rhgs.noarch
tendrl-ansible-1.5.4-1.el7rhgs.noarch
tendrl-api-1.5.4-2.el7rhgs.noarch
tendrl-api-httpd-1.5.4-2.el7rhgs.noarch
tendrl-collectd-selinux-1.5.3-2.el7rhgs.noarch
tendrl-commons-1.5.4-2.el7rhgs.noarch
tendrl-gluster-integration-1.5.4-2.el7rhgs.noarch
tendrl-grafana-plugins-1.5.4-3.el7rhgs.noarch
tendrl-grafana-selinux-1.5.3-2.el7rhgs.noarch
tendrl-monitoring-integration-1.5.4-3.el7rhgs.noarch
tendrl-node-agent-1.5.4-2.el7rhgs.noarch
tendrl-notifier-1.5.4-2.el7rhgs.noarch
tendrl-selinux-1.5.3-2.el7rhgs.noarch
tendrl-ui-1.5.4-2.el7rhgs.noarch
vdsm-gluster-4.17.33-1.2.el7rhgs.noarch
--> ASSIGNED

Comment 5 Nishanth Thomas 2017-11-15 08:42:07 UTC
Snapshot management is not currently supported; Tendrl does not detect that a snapshot has been deleted. A snapshot delete might delete the cloned volume, but it won't fire the "volume_delete" event, which Tendrl requires in order to recognize that a volume was deleted. If you delete the volume directly, gluster fires the "volume_delete" event, which Tendrl processes, updating the respective data stores and, eventually, the dashboard. Please test this scenario; I am flipping the state of the bug back to ON_QA.
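The distinction drawn above can be illustrated with a small, hypothetical sketch (this is not the actual Tendrl code; the event names and store layout are assumptions based on the comment): a monitor that only removes a volume from its store when it sees a "volume_delete" event will keep a stale entry if the volume disappears through a path that never emits that event, such as a snapshot delete.

```python
# Hypothetical sketch of event-driven volume bookkeeping (not actual Tendrl code).
# A volume is removed from the store only when a "volume_delete" event arrives.

class VolumeStore:
    def __init__(self):
        self.volumes = set()

    def handle_event(self, event):
        if event["type"] == "volume_create":
            self.volumes.add(event["volume"])
        elif event["type"] == "volume_delete":
            self.volumes.discard(event["volume"])
        # Any other event is ignored, so a volume removed on the gluster side
        # without a "volume_delete" event stays in the store -- and would keep
        # showing up in dashboards fed from it.

store = VolumeStore()
store.handle_event({"type": "volume_create", "volume": "snap_clone_vol"})
# A snapshot delete removes the clone volume but fires no "volume_delete"
# event, so the store (and hence the dashboard) still lists it:
assert "snap_clone_vol" in store.volumes
# A direct volume delete fires the event and the entry is cleaned up:
store.handle_event({"type": "volume_delete", "volume": "snap_clone_vol"})
assert "snap_clone_vol" not in store.volumes
```

This is why deleting the clone volume directly (step 3 of the reproducer) clears the dashboard, while removing it as a side effect of a snapshot delete does not.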

Comment 6 Martin Kudlej 2017-11-15 11:29:44 UTC
It seems that I hit a different bug, bug 1513414. I retested this with a different cluster and it worked. --> VERIFIED

Comment 8 errata-xmlrpc 2017-12-18 04:39:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3478