Bug 1563519

Summary: When gluster-integration or glusterd goes down for a few minutes, alert_count for volumes is reset to zero
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: gowtham <gshanmug>
Component: web-admin-tendrl-gluster-integration
Assignee: gowtham <gshanmug>
Status: CLOSED ERRATA
QA Contact: Filip Balák <fbalak>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: rhgs-3.4
CC: amukherj, fbalak, mbukatov, nthomas, rhs-bugs
Target Milestone: ---
Target Release: RHGS 3.4.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: tendrl-node-agent-1.6.3-2.el7rhgs.noarch
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-09-04 07:03:46 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1503137

Description gowtham 2018-04-04 05:21:09 UTC
Description of problem:

When gluster-integration or glusterd goes down for a few minutes, the volume details are deleted by the TTL. The alert count for a particular volume is maintained only inside the volume details, so the TTL deletes the alert_count as well.
When gluster-integration or glusterd comes back up or is restarted, the alert_count is reset to zero.

The same applies to the node alert count: when node-agent goes down for a few minutes, the node alert count is also deleted by the TTL.
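To make the mechanism concrete, the following is a minimal, runnable sketch of the failure mode, assuming an etcd-style store where each volume's details (including alert_count) live under a single key with a TTL. The TTLStore class and sync_volume function are hypothetical stand-ins for the tendrl object store, not the actual tendrl code.

import time

TTL_SECONDS = 2  # short TTL so the example runs quickly


class TTLStore:
    """Toy key-value store that expires entries after TTL_SECONDS."""

    def __init__(self):
        self._data = {}  # key -> (value, expiry timestamp)

    def put(self, key, value):
        self._data[key] = (value, time.time() + TTL_SECONDS)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None or time.time() > entry[1]:
            self._data.pop(key, None)  # expired: mimic TTL deletion
            return None
        return entry[0]


def sync_volume(store, vol_id):
    """Each sync refreshes the volume details under one TTL'd key.

    If the previous details already expired, alert_count restarts at zero.
    """
    details = store.get("volumes/" + vol_id) or {"alert_count": 0}
    store.put("volumes/" + vol_id, details)
    return details


store = TTLStore()
sync_volume(store, "vol1")
store.get("volumes/vol1")["alert_count"] += 3  # three alerts are raised

time.sleep(TTL_SECONDS + 1)  # service down longer than the TTL: key expires
print(sync_volume(store, "vol1"))  # {'alert_count': 0} -- the count was lost

Because the counter has no home outside the TTL'd volume details, any outage longer than the TTL loses it; the same pattern applies to the node alert count under node-agent.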

Version-Release number of selected component (if applicable):


How reproducible:
For the volume alert count:
 Stop glusterd or gluster-integration and start it again after a few minutes. The volume alert_count is then reset to zero.

For the node alert count:
 Stop node-agent and start it again after a few minutes. The node alert count is then reset to zero.

Steps to Reproduce:
1. Volume alert: Stop glusterd and start it again after a few minutes.
2. Node alert: Stop node-agent and start it again after a few minutes.
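
The two steps above can be scripted. This is a hedged sketch only: glusterd is the standard Gluster management daemon unit, but the tendrl unit name and the exact downtime needed are assumptions to confirm on the installed system.

import subprocess
import time

VOLUME_CASE = "glusterd"         # step 1: volume alert count
NODE_CASE = "tendrl-node-agent"  # step 2: node alert count (assumed unit name)
DOWNTIME = 300                   # "a few minutes" -- longer than the store TTL

for unit in (VOLUME_CASE, NODE_CASE):
    subprocess.run(["systemctl", "stop", unit], check=True)
    time.sleep(DOWNTIME)  # wait long enough for the TTL to delete the details
    subprocess.run(["systemctl", "start", unit], check=True)
    # Now compare the alert_count shown for the volume/node with the actual
    # open alerts: before the fix, the count has been reset to zero.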


Actual results:
The node alert count and volume alert count do not match the actual alerts.

Expected results:
The node and volume alert counts should always match the actual number of alerts.
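
For reference, one plausible way to meet this expectation, sketched under assumptions: if the raised alerts are kept in a store that is not subject to the volume-details TTL, the integration can recount them when it rebuilds the details instead of defaulting to zero. The plain-dict stores and function names below are hypothetical and do not claim to reproduce the actual tendrl-node-agent fix.

def recount_alert_count(alerts_by_volume, vol_id):
    """Derive alert_count from the surviving alerts instead of resetting it."""
    return len(alerts_by_volume.get(vol_id, []))


def sync_volume_fixed(volume_details, alerts_by_volume, vol_id):
    """Sync step: if the TTL wiped this volume's details, rebuild the counter."""
    details = volume_details.get(vol_id)
    if details is None:
        details = {"alert_count": recount_alert_count(alerts_by_volume, vol_id)}
        volume_details[vol_id] = details
    return details


# Usage: the details expired (empty store) but two alerts survived elsewhere.
alerts = {"vol1": ["brick down", "quorum lost"]}
print(sync_volume_fixed({}, alerts, "vol1"))  # {'alert_count': 2}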

Additional info:

Comment 4 Filip Balák 2018-05-25 11:21:00 UTC
Reproduced with the old version (tendrl-notifier-1.5.4-6.el7rhgs.noarch) and tested with:
tendrl-ansible-1.6.3-4.el7rhgs.noarch
tendrl-api-1.6.3-3.el7rhgs.noarch
tendrl-api-httpd-1.6.3-3.el7rhgs.noarch
tendrl-commons-1.6.3-5.el7rhgs.noarch
tendrl-gluster-integration-1.6.3-3.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-3.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-3.el7rhgs.noarch
tendrl-node-agent-1.6.3-5.el7rhgs.noarch
tendrl-notifier-1.6.3-3.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-2.el7rhgs.noarch

Looks ok. --> VERIFIED

Comment 6 errata-xmlrpc 2018-09-04 07:03:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616