Bug 1563519 - When gluster-integration or glusterd goes down for a few minutes, alert_count for volumes is reset to zero
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: web-admin-tendrl-gluster-integration
Version: 3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.4.0
Assigned To: gowtham
QA Contact: Filip Balák
Depends On:
Blocks: 1503137
Reported: 2018-04-04 01:21 EDT by gowtham
Modified: 2018-09-04 03:04 EDT
CC List: 5 users

See Also:
Fixed In Version: tendrl-node-agent-1.6.3-2.el7rhgs.noarch
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-09-04 03:03:46 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
Github Tendrl/gluster-integration/issues/598 None None None 2018-04-04 01:24 EDT
Github Tendrl/monitoring-integration/issues/373 None None None 2018-04-04 01:25 EDT
Github Tendrl/node-agent/issues/753 None None None 2018-04-04 01:24 EDT
Red Hat Product Errata RHSA-2018:2616 None None None 2018-09-04 03:04 EDT

Description gowtham 2018-04-04 01:21:09 EDT
Description of problem:

When gluster-integration or glusterd goes down for a few minutes, the volume details are deleted once their TTL expires. The alert count for a volume is maintained only inside those volume details, so the TTL expiry deletes the alert_count along with them.
When gluster-integration or glusterd comes back up (or is restarted), alert_count is reinitialized to zero.

The same applies to the node alert count: when node-agent goes down for a few minutes, the node alert count is also deleted by TTL.
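
The counter loss is easy to see with a toy model of a TTL-governed store. This is a minimal, self-contained sketch; the TTLStore class and the key name are hypothetical stand-ins, not the real Tendrl/etcd API:

import time

class TTLStore:
    """Toy key-value store where every key expires after `ttl` seconds."""
    def __init__(self, ttl):
        self.ttl = ttl
        self.data = {}  # key -> (value, expiry timestamp)

    def put(self, key, value):
        self.data[key] = (value, time.time() + self.ttl)

    def get(self, key, default=None):
        if key not in self.data:
            return default
        value, expiry = self.data[key]
        if time.time() > expiry:
            del self.data[key]  # TTL expired -> details are gone
            return default
        return value

store = TTLStore(ttl=2)

# While gluster-integration is running, it keeps refreshing the volume
# details, which include the alert counter.
store.put("volumes/vol1/alert_count", 5)

# gluster-integration (or glusterd) stays down longer than the TTL ...
time.sleep(3)

# ... so after the restart the old counter is gone and the details are
# recreated from scratch with alert_count starting at zero again.
print(store.get("volumes/vol1/alert_count", default=0))  # -> 0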

Version-Release number of selected component (if applicable):


How reproducible:
For volume alert count:
 Stop glusterd or gluster-integration and start it again after a few minutes. The volume alert_count is then reinitialized to zero.

For node alert count:
 Stop node-agent and start it again after a few minutes. The node alert count is then reinitialized to zero.

Steps to Reproduce:
1. Volume alert: stop glusterd and start it again after a few minutes.
2. Node alert: stop node-agent and start it again after a few minutes.


Actual results:
The node alert count and volume alert count do not match the actual alerts.

Expected results:
The node and volume alert counts should always match the actual number of alerts.
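
One way to keep the counts consistent (a sketch only, not necessarily how the shipped fix works) is to derive the per-volume count from the stored alerts themselves when the service starts, rather than trusting a separately persisted counter that can expire. The alert record layout and field names below are hypothetical:

from collections import Counter

def rebuild_alert_counts(alerts):
    """Return {volume_id: number of active alerts} from the alert records."""
    return Counter(a["volume_id"] for a in alerts if a.get("active", True))

# Example: two active alerts on vol1, one active and one cleared on vol2.
alerts = [
    {"volume_id": "vol1", "active": True},
    {"volume_id": "vol1", "active": True},
    {"volume_id": "vol2", "active": True},
    {"volume_id": "vol2", "active": False},
]
print(rebuild_alert_counts(alerts))  # Counter({'vol1': 2, 'vol2': 1})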

Additional info:
Comment 4 Filip Balák 2018-05-25 07:21:00 EDT
Reproduced with the old version (tendrl-notifier-1.5.4-6.el7rhgs.noarch) and tested with:
tendrl-ansible-1.6.3-4.el7rhgs.noarch
tendrl-api-1.6.3-3.el7rhgs.noarch
tendrl-api-httpd-1.6.3-3.el7rhgs.noarch
tendrl-commons-1.6.3-5.el7rhgs.noarch
tendrl-gluster-integration-1.6.3-3.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-3.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-3.el7rhgs.noarch
tendrl-node-agent-1.6.3-5.el7rhgs.noarch
tendrl-notifier-1.6.3-3.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-2.el7rhgs.noarch

Looks ok. --> VERIFIED
Comment 6 errata-xmlrpc 2018-09-04 03:03:46 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616
