Bug 1359840 - pool utilization alerts
Summary: pool utilization alerts
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat
Component: core
Version: 2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 2
Assignee: anmol babu
QA Contact: Martin Kudlej
URL:
Whiteboard:
Duplicates: 1360288
Depends On:
Blocks: Console-2-GA
 
Reported: 2016-07-25 14:11 UTC by Lubos Trilety
Modified: 2016-08-23 19:58 UTC
CC List: 5 users

Fixed In Version: rhscon-core-0.0.37-1.el7scon
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-23 19:58:00 UTC
Target Upstream Version:


Attachments
pool list (206.14 KB, image/png)
2016-07-25 14:11 UTC, Lubos Trilety


Links
System ID Priority Status Summary Last Updated
Gerrithub.io 285226 None None None 2016-07-26 07:23:25 UTC
Red Hat Bugzilla 1358267 None None None Never
Red Hat Product Errata RHEA-2016:1754 normal SHIPPED_LIVE New packages: Red Hat Storage Console 2.0 2017-04-18 19:09:06 UTC

Internal Links: 1358267

Description Lubos Trilety 2016-07-25 14:11:16 UTC
Created attachment 1183816
pool list

Description of problem:
Pool utilization alerts doesn't work as they should. I have a pool which said 10GB available. After I created five 1000MB objects there a new critical alert was generated on usm:
Pool utilization for pool default on ceph cluster has moved to CRITICAL

After some time it was replaced by:
Pool utilization for pool default on ceph cluster has moved to WARNING

Even that is not correct, as the settings specify a warning alert when utilization exceeds 75% and a critical alert when it exceeds 90%. With only 5GB of the 10GB used, neither threshold is reached.

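For reference, the expected threshold behaviour can be sketched as follows. This is a minimal illustration using the 75%/90% settings described above; the names are hypothetical and not taken from the actual rhscon-core code.

```python
# Hypothetical sketch of the expected alert classification; the
# thresholds are the 75% warning / 90% critical settings from the
# report. Not the actual rhscon-core implementation.
WARNING_PCT = 75.0
CRITICAL_PCT = 90.0

def classify_utilization(used_pct):
    """Map a pool utilization percentage to an alert level."""
    if used_pct >= CRITICAL_PCT:
        return "CRITICAL"
    if used_pct >= WARNING_PCT:
        return "WARNING"
    return "OK"

# Five 1000MB objects in a pool reporting 10GB available is roughly
# 49% utilization, so no alert should fire at all:
print(classify_utilization(5 * 1000 / (10 * 1024) * 100))  # prints "OK"
```

Under this logic the observed CRITICAL and later WARNING alerts would both be wrong for the utilization shown in the screenshot.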

Version-Release number of selected component (if applicable):
rhscon-core-0.0.36-1.el7scon.x86_64
rhscon-ui-0.0.50-1.el7scon.noarch
rhscon-ceph-0.0.36-1.el7scon.x86_64
rhscon-core-selinux-0.0.36-1.el7scon.noarch

How reproducible:
100%

Steps to Reproduce:
1. Create some pool (e.g. from 4 10GB osds)
2. Create some objects in the pool (e.g. 5x 1000MB)

Actual results:
A new critical event is generated on the USM side. After some time it is replaced by a warning event.

Expected results:
No utilization event should be created, as neither threshold is reached.

Additional info:
The pool utilization and the presence of the alert can be seen in the attached screenshot.

Comment 1 Lubos Trilety 2016-07-25 14:20:52 UTC
Note that the pool in the scenario should be a standard replicated one with the replica count set to 4.
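The arithmetic behind the scenario can be sketched as follows, with sizes assumed from the reproduction steps. Whether utilization is measured against usable (replica-adjusted) capacity or raw capacity, it stays well below the 75% warning threshold:

```python
# Assumed sizes from the reproduction steps: 4 OSDs of 10GB each and
# a replicated pool with replica count 4 holding five 1000MB objects.
raw_capacity_mb = 4 * 10 * 1024              # 40960 MB of raw space
replica_count = 4
usable_mb = raw_capacity_mb / replica_count  # 10240 MB usable ("10GB available")
stored_mb = 5 * 1000                         # logical data written

# Logical data against usable space, and replicated writes against raw
# space, come out the same here: about 48.8%, below the 75% threshold.
pct_of_usable = stored_mb / usable_mb * 100
pct_of_raw = stored_mb * replica_count / raw_capacity_mb * 100
print(round(pct_of_usable, 1), round(pct_of_raw, 1))  # prints "48.8 48.8"
```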

Comment 2 Nishanth Thomas 2016-07-26 12:22:40 UTC
*** Bug 1360288 has been marked as a duplicate of this bug. ***

Comment 3 Martin Kudlej 2016-08-05 14:16:06 UTC
Tested with
ceph-ansible-1.0.5-31.el7scon.noarch
ceph-installer-1.0.14-1.el7scon.noarch
rhscon-ceph-0.0.39-1.el7scon.x86_64
rhscon-core-0.0.39-1.el7scon.x86_64
rhscon-core-selinux-0.0.39-1.el7scon.noarch
rhscon-ui-0.0.51-1.el7scon.noarch
and it works. However, because of bug 1358267, I was not able to test this with pools in more than one storage profile.

Comment 5 errata-xmlrpc 2016-08-23 19:58:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1754

