Bug 1563519 - When gluster-integration or glusterd goes down for a few minutes, the alert_count for volumes is reset to zero
Summary: When gluster-integration or glusterd goes down for a few minutes then...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: web-admin-tendrl-gluster-integration
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: gowtham
QA Contact: Filip Balák
URL:
Whiteboard:
Depends On:
Blocks: 1503137
 
Reported: 2018-04-04 05:21 UTC by gowtham
Modified: 2018-09-04 07:04 UTC
CC List: 5 users

Fixed In Version: tendrl-node-agent-1.6.3-2.el7rhgs.noarch
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-04 07:03:46 UTC
Embargoed:




Links
System                                        ID              Last Updated
Github Tendrl gluster-integration issues      598             2018-04-04 05:24:26 UTC
Github Tendrl monitoring-integration issues   373             2018-04-04 05:25:55 UTC
Github Tendrl node-agent issues               753             2018-04-04 05:24:04 UTC
Red Hat Product Errata                        RHSA-2018:2616  2018-09-04 07:04:50 UTC

Description gowtham 2018-04-04 05:21:09 UTC
Description of problem:

When gluster-integration or glusterd goes down for a few minutes, the volume details are deleted when their TTL expires. Because the alert count for a volume is kept only inside the volume details, the TTL expiry deletes the alert_count as well. When gluster-integration or glusterd comes back up (or is restarted), the alert_count is re-initialized to zero.

The same happens with the node alert count: when node-agent goes down for a few minutes, the node alert count is also deleted by TTL expiry.
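To make the failure mode concrete, here is a minimal, self-contained Python sketch of the behaviour described above. The central store is modelled as a toy in-memory key/value map with per-key expiry; the key name (volumes/vol1/alert_count) and the TTL value are illustrative assumptions, not the actual Tendrl schema or API.

import time

# Toy key/value store with per-key TTL, standing in for the central store
# (key names and the TTL below are illustrative assumptions).
class TTLStore:
    def __init__(self):
        self._data = {}   # key -> (value, expires_at)

    def write(self, key, value, ttl):
        self._data[key] = (value, time.time() + ttl)

    def read(self, key, default=None):
        if key not in self._data:
            return default
        value, expires_at = self._data[key]
        if time.time() >= expires_at:
            del self._data[key]      # TTL expired: the key is gone
            return default
        return value

store = TTLStore()

# gluster-integration keeps the volume details alive by rewriting them with a
# TTL; the per-volume alert_count lives inside that same TTL'd subtree.
store.write("volumes/vol1/alert_count", 5, ttl=1)

time.sleep(1.5)   # gluster-integration/glusterd stays down longer than the TTL

# When the services come back, the volume object is recreated and the counter
# starts again from zero -- the 5 recorded alerts are lost.
print("alert_count after restart:", store.read("volumes/vol1/alert_count", default=0))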

Version-Release number of selected component (if applicable):


How reproducible:
For the volume alert count:
 Stop glusterd or gluster-integration and start it again after a few minutes; the volume alert_count is then re-initialized to zero.

For the node alert count:
 Stop node-agent and start it again after a few minutes; the node alert count is then re-initialized to zero.

Steps to Reproduce:
1. Volume alert: stop glusterd and start it again after a few minutes.
2. Node alert: stop node-agent and start it again after a few minutes.


Actual results:
The node alert count and volume alert count do not match the actual number of alerts.

Expected results:
The node and volume alert counts should always match the actual number of alerts.

Additional info:
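Purely for illustration, and not necessarily how the actual fix is implemented, the sketch below shows one way the reset could be avoided: keep the alert counters outside the TTL'd volume details and read them back when the volume object is recreated. All names here are hypothetical.

# Hypothetical sketch (not the actual fix): counters live in a mapping that is
# never expired, separate from the TTL'd volume details.
persistent_counters = {}

def on_alert(volume_id):
    # increment a counter stored outside the TTL'd volume details
    persistent_counters[volume_id] = persistent_counters.get(volume_id, 0) + 1

def recreate_volume_details(volume_id):
    # on gluster-integration restart, restore the saved counter instead of
    # starting again from zero
    return {"alert_count": persistent_counters.get(volume_id, 0)}

on_alert("vol1")
on_alert("vol1")
print(recreate_volume_details("vol1"))   # -> {'alert_count': 2}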

Comment 4 Filip Balák 2018-05-25 11:21:00 UTC
Reproduced with old version (tendrl-notifier-1.5.4-6.el7rhgs.noarch) and tested with:
tendrl-ansible-1.6.3-4.el7rhgs.noarch
tendrl-api-1.6.3-3.el7rhgs.noarch
tendrl-api-httpd-1.6.3-3.el7rhgs.noarch
tendrl-commons-1.6.3-5.el7rhgs.noarch
tendrl-gluster-integration-1.6.3-3.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-3.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-3.el7rhgs.noarch
tendrl-node-agent-1.6.3-5.el7rhgs.noarch
tendrl-notifier-1.6.3-3.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-2.el7rhgs.noarch

Looks ok. --> VERIFIED

Comment 6 errata-xmlrpc 2018-09-04 07:03:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616

