Description of problem:
If a user deletes a volume or brick, it remains visible in the Grafana dashboards, so the user can get a false impression that there is a problem in the cluster.

Version-Release number of selected component (if applicable):
(package versions collected via ansible from mkudlej-usm2-client, mkudlej-usm2-gl1 through mkudlej-usm2-gl6, and mkudlej-usm2-server, all in usmqe.lab.eng.blr.redhat.com; every host returned SUCCESS, rc=0)
etcd-3.2.7-1.el7.x86_64
glusterfs-3.8.4-18.4.el7.x86_64
glusterfs-3.8.4-50.el7rhgs.x86_64
glusterfs-api-3.8.4-50.el7rhgs.x86_64
glusterfs-cli-3.8.4-50.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-18.4.el7.x86_64
glusterfs-client-xlators-3.8.4-50.el7rhgs.x86_64
glusterfs-events-3.8.4-50.el7rhgs.x86_64
glusterfs-fuse-3.8.4-18.4.el7.x86_64
glusterfs-fuse-3.8.4-50.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-50.el7rhgs.x86_64
glusterfs-libs-3.8.4-18.4.el7.x86_64
glusterfs-libs-3.8.4-50.el7rhgs.x86_64
glusterfs-server-3.8.4-50.el7rhgs.x86_64
python-etcd-0.4.5-1.noarch
rubygem-etcd-0.3.0-1.el7.noarch
tendrl-ansible-1.5.3-2.el7rhgs.noarch
tendrl-api-1.5.3-2.el7rhgs.noarch
tendrl-api-httpd-1.5.3-2.el7rhgs.noarch
tendrl-commons-1.5.3-1.el7rhgs.noarch
tendrl-gluster-integration-1.5.3-2.el7rhgs.noarch
tendrl-grafana-plugins-1.5.3-2.el7rhgs.noarch
tendrl-grafana-selinux-1.5.3-2.el7rhgs.noarch
tendrl-monitoring-integration-1.5.3-2.el7rhgs.noarch
tendrl-node-agent-1.5.3-3.el7rhgs.noarch
tendrl-notifier-1.5.3-1.el7rhgs.noarch
tendrl-selinux-1.5.3-2.el7rhgs.noarch
tendrl-ui-1.5.3-2.el7rhgs.noarch

How reproducible:
100%

Steps to Reproduce (a scripted form of these steps is sketched below):
1. Create a snapshot.
2. Clone the snapshot - a new volume is created.
3. Delete the snapshot from step 2 - the volume from step 2 is deleted.
4. Check the Grafana dashboards: they still show invalid and useless information for the deleted volume.

Expected results:
There is no information about deleted volumes and bricks in any dashboard.
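For convenience, here is a minimal scripted sketch of the reproducer above. The volume, snapshot, and clone names (vol1, snap1, clone_vol1) are placeholders, not taken from the report, and the script assumes it runs on a node in the trusted storage pool with the gluster CLI available; the exact delete semantics in step 3 follow the report's description, not a verified gluster guarantee.

```python
#!/usr/bin/env python3
"""Sketch of the reproducer; names and the delete path are assumptions."""
import subprocess

def gluster(*args):
    """Invoke the gluster CLI in script mode (no interactive prompts)."""
    cmd = ["gluster", "--mode=script", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Create a snapshot of an existing (placeholder) volume.
gluster("snapshot", "create", "snap1", "vol1", "no-timestamp")

# 2. Activate the snapshot and clone it; the clone creates a new
#    volume named clone_vol1.
gluster("snapshot", "activate", "snap1")
gluster("snapshot", "clone", "clone_vol1", "snap1")

# 3. Delete the snapshot. Per the report, the cloned volume is removed
#    as part of this step, and (per the developer comment further down)
#    no "volume_delete" event is emitted on this path.
gluster("snapshot", "delete", "snap1")

# 4. Open the Grafana dashboards: with the affected builds, clone_vol1
#    and its bricks are still listed even though they no longer exist.
```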
Triage Nov 8: agreed to include in 3.3.1.
Tested with: tendrl-monitoring-integration-1.5.4-3.el7rhgs.noarch
I still see this issue with the reproducer described in the 1st comment.

etcd-3.2.7-1.el7.x86_64
glusterfs-3.8.4-52.el7_4.x86_64
glusterfs-3.8.4-52.el7rhgs.x86_64
glusterfs-api-3.8.4-52.el7rhgs.x86_64
glusterfs-cli-3.8.4-52.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-52.el7_4.x86_64
glusterfs-client-xlators-3.8.4-52.el7rhgs.x86_64
glusterfs-events-3.8.4-52.el7rhgs.x86_64
glusterfs-fuse-3.8.4-52.el7_4.x86_64
glusterfs-fuse-3.8.4-52.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-52.el7rhgs.x86_64
glusterfs-libs-3.8.4-52.el7_4.x86_64
glusterfs-libs-3.8.4-52.el7rhgs.x86_64
glusterfs-rdma-3.8.4-52.el7rhgs.x86_64
glusterfs-server-3.8.4-52.el7rhgs.x86_64
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.3.x86_64
python-etcd-0.4.5-1.el7rhgs.noarch
python-gluster-3.8.4-52.el7rhgs.noarch
rubygem-etcd-0.3.0-1.el7rhgs.noarch
tendrl-ansible-1.5.4-1.el7rhgs.noarch
tendrl-api-1.5.4-2.el7rhgs.noarch
tendrl-api-httpd-1.5.4-2.el7rhgs.noarch
tendrl-collectd-selinux-1.5.3-2.el7rhgs.noarch
tendrl-commons-1.5.4-2.el7rhgs.noarch
tendrl-gluster-integration-1.5.4-2.el7rhgs.noarch
tendrl-grafana-plugins-1.5.4-3.el7rhgs.noarch
tendrl-grafana-selinux-1.5.3-2.el7rhgs.noarch
tendrl-monitoring-integration-1.5.4-3.el7rhgs.noarch
tendrl-node-agent-1.5.4-2.el7rhgs.noarch
tendrl-notifier-1.5.4-2.el7rhgs.noarch
tendrl-selinux-1.5.3-2.el7rhgs.noarch
tendrl-ui-1.5.4-2.el7rhgs.noarch
vdsm-gluster-4.17.33-1.2.el7rhgs.noarch

--> ASSIGNED
Snapshot management is not currently supported, so Tendrl does not understand when a snapshot is deleted. Deleting a snapshot may delete the clone volume, but it will not fire the "volume_delete" event, which Tendrl requires in order to know that a volume was deleted. If you delete the volume itself, gluster fires the "volume_delete" event; Tendrl processes it, updates the respective data stores, and the change is eventually reflected in the dashboard. Please test this scenario. I am flipping the state of the bug back to ON_QA.
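To illustrate the event flow described above, here is a minimal sketch of a webhook consumer for the glusterfs events framework. It is not the Tendrl implementation: the in-memory `volumes` dict, the port, and the payload field names (`event`, `message["name"]`) are assumptions for illustration and should be checked against the gluster eventsapi documentation; registering the endpoint (e.g. with `gluster-eventsapi webhook-add`) is also assumed.

```python
#!/usr/bin/env python3
"""Sketch of a consumer for gluster "volume_delete" webhook events.

Not the Tendrl code: the store and payload shape are assumptions.
"""
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for the data store that backs the dashboards.
volumes = {"vol1": {"bricks": 2}, "clone_vol1": {"bricks": 2}}

class EventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # React only to volume deletion events (name assumed to match
        # the eventsapi convention, compared case-insensitively here).
        if payload.get("event", "").upper() == "VOLUME_DELETE":
            name = payload.get("message", {}).get("name")
            # Drop the volume so it no longer appears in later queries.
            if volumes.pop(name, None) is not None:
                print(f"removed deleted volume {name}; remaining: {list(volumes)}")
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 9000), EventHandler).serve_forever()
```

Without such an event arriving (as in the snapshot-delete path above), nothing removes the volume from the store, which is why the dashboards keep showing it.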
It seems that I had hit a different bug, bug 1513414. I retested this with a different cluster and it worked. --> VERIFIED
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2017:3478