Bug 1359840
Field | Value
---|---
Summary | pool utilization alerts
Product | [Red Hat Storage] Red Hat Storage Console
Component | core
Core sub component | monitoring
Status | CLOSED ERRATA
Severity | unspecified
Priority | unspecified
Version | 2
Target Release | 2
Hardware | Unspecified
OS | Unspecified
Fixed In Version | rhscon-core-0.0.37-1.el7scon
Doc Type | If docs needed, set a value
Reporter | Lubos Trilety <ltrilety>
Assignee | anmol babu <anbabu>
QA Contact | Martin Kudlej <mkudlej>
CC | anbabu, mbukatov, mkudlej, nthomas, vsarmila
Last Closed | 2016-08-23 19:58:00 UTC
Type | Bug
Bug Blocks | 1353450
Attachments | pool list (attachment 1183816)
Note: the pool in this scenario should be a standard replicated one with replication set to 4.

*** Bug 1360288 has been marked as a duplicate of this bug. ***

Tested with:
ceph-ansible-1.0.5-31.el7scon.noarch
ceph-installer-1.0.14-1.el7scon.noarch
rhscon-ceph-0.0.39-1.el7scon.x86_64
rhscon-core-0.0.39-1.el7scon.x86_64
rhscon-core-selinux-0.0.39-1.el7scon.noarch
rhscon-ui-0.0.51-1.el7scon.noarch

and it works. However, because of bug 1358267 I was not able to test this with pools in more than one storage profile.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1754
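The note above ties the reproduction to a replicated pool with replication 4. One plausible way such false alerts can arise (an assumption for illustration only, not a confirmed root cause from this report) is a utilization calculation that counts replicated raw usage against the pool's logical capacity. A minimal sketch, with hypothetical names:

```python
# Hypothetical illustration of how a replication factor can skew a
# pool-utilization calculation; this is NOT the rhscon-core code.
def utilization_percent(logical_used_mb, logical_capacity_mb,
                        replication=1, count_replicas=False):
    """Return percent used. If count_replicas is True, replica copies
    are (incorrectly) counted against the logical capacity."""
    used = logical_used_mb * (replication if count_replicas else 1)
    return 100.0 * used / logical_capacity_mb

# 5000 MB of objects in a 10 GB pool with replication 4:
correct = utilization_percent(5000, 10240, replication=4)
# ~48.8% -> below every threshold, no alert expected
buggy = utilization_percent(5000, 10240, replication=4,
                            count_replicas=True)
# ~195% -> would wrongly cross both the 75% and 90% thresholds
```

Under that assumption, a half-full pool would appear nearly twice over capacity, which would explain a spurious CRITICAL alert; the actual fix shipped in rhscon-core-0.0.37-1.el7scon.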
Created attachment 1183816 [details]: pool list

Description of problem:
Pool utilization alerts do not work as they should. I have a pool which reported 10 GB available. After I created five 1000 MB objects in it, a new critical alert was generated on USM:

Pool utilization for pool default on ceph cluster has moved to CRITICAL

After some time it was replaced by:

Pool utilization for pool default on ceph cluster has moved to WARNING

Even that is not correct, because the configured thresholds are a warning alert above 75% utilization and a critical alert above 90%. Five 1000 MB objects fill only about half of a 10 GB pool, so neither threshold is reached.

Version-Release number of selected component (if applicable):
rhscon-core-0.0.36-1.el7scon.x86_64
rhscon-ui-0.0.50-1.el7scon.noarch
rhscon-ceph-0.0.36-1.el7scon.x86_64
rhscon-core-selinux-0.0.36-1.el7scon.noarch

How reproducible:
100%

Steps to Reproduce:
1. Create a pool (e.g. from 4 x 10 GB OSDs)
2. Create some objects in the pool (e.g. 5 x 1000 MB)

Actual results:
A new critical event is generated on the USM side. After some time it is replaced by a warning event.

Expected results:
No utilization event is created, as no threshold is reached.

Additional info:
The pool utilization and the presence of the alert can be seen on the attached screenshot.
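The expected behaviour described above can be sketched as a simple threshold check (a minimal illustration assuming the 75%/90% thresholds from the report; the function name is hypothetical and this is not the actual rhscon-core implementation):

```python
# Hypothetical sketch of the expected alert logic from this report.
WARNING_THRESHOLD = 75.0   # percent used -> WARNING alert
CRITICAL_THRESHOLD = 90.0  # percent used -> CRITICAL alert

def classify_pool_utilization(used_mb, capacity_mb):
    """Return the alert level for a pool, or None if no alert applies."""
    percent_used = 100.0 * used_mb / capacity_mb
    if percent_used >= CRITICAL_THRESHOLD:
        return "CRITICAL"
    if percent_used >= WARNING_THRESHOLD:
        return "WARNING"
    return None

# Scenario from the report: five 1000 MB objects in a 10 GB pool is
# roughly 50% utilization, so no alert should be raised at all.
assert classify_pool_utilization(5 * 1000, 10 * 1024) is None
```

With this logic the observed CRITICAL (and later WARNING) alerts at roughly 50% utilization would never fire, matching the Expected results above.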