Bug 988358 - [RFE] Alarm state/lifecycle API
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-ceilometer
Hardware: Unspecified  OS: Unspecified
Priority: high  Severity: high
Target Milestone: Upstream M1
Target Release: 4.0
Assigned To: Eoghan Glynn
QA Contact: Kevin Whitney
Keywords: FutureFeature, OtherQA
Depends On:
Blocks: 973191 RHOS40RFE 986393 986410 1055813
Reported: 2013-07-25 07:58 EDT by Eoghan Glynn
Modified: 2014-03-14 01:56 EDT
CC: 10 users

See Also:
Fixed In Version: openstack-ceilometer-2013.2-0.2.b1.el6ost
Doc Type: Enhancement
Doc Text:
The public v2 API of OpenStack Metering (Ceilometer) was extended to expose create, retrieve, update, and delete operations for alarms. This was required because Ceilometer alarms are a user-oriented feature, reflecting the user's view of cloud resources (as opposed to the cloud operator's view of the data center fabric); an alarm's lifecycle must therefore be accessible to both normal and admin users.
Story Points: ---
Clone Of:
Last Closed: 2013-12-19 19:15:19 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments

External Trackers
Tracker ID Priority Status Summary Last Updated
OpenStack gerrit 27691 None None None Never
OpenStack gerrit 28007 None None None Never
OpenStack gerrit 28008 None None None Never
OpenStack gerrit 28010 None None None Never
OpenStack gerrit 28378 None None None Never

Description Eoghan Glynn 2013-07-25 07:58:12 EDT
Provide an API allowing alarms to be created, retrieved, updated, and destroyed.

Support analogous actions in the python-ceilometerclient library and CLI.

Upstream blueprint: https://blueprints.launchpad.net/ceilometer/+spec/alarm-api
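The lifecycle operations map onto the v2 REST resource `/v2/alarms` (POST to create, GET to retrieve, PUT to update, DELETE to destroy). A minimal sketch of those calls follows; `CEILOMETER_URL` and `TOKEN` are placeholders for a real API endpoint and Keystone token, and the exact request body schema is not reproduced here.

```shell
# Sketch of the v2 alarm lifecycle over REST. CEILOMETER_URL defaults to
# the stock ceilometer-api port; TOKEN must hold a valid Keystone token.
CEILOMETER_URL=${CEILOMETER_URL:-http://localhost:8777}

auth() { echo "X-Auth-Token: $TOKEN"; }

# Create: POST a JSON alarm definition, returns the new alarm (with its ID).
alarm_create() { curl -s -X POST   -H "$(auth)" -H 'Content-Type: application/json' \
                      -d "$1" "$CEILOMETER_URL/v2/alarms"; }
# Retrieve: GET a single alarm by ID.
alarm_show()   { curl -s -X GET    -H "$(auth)" "$CEILOMETER_URL/v2/alarms/$1"; }
# Update: PUT a modified alarm definition back to the same resource.
alarm_update() { curl -s -X PUT    -H "$(auth)" -H 'Content-Type: application/json' \
                      -d "$2" "$CEILOMETER_URL/v2/alarms/$1"; }
# Destroy: DELETE the alarm resource.
alarm_delete() { curl -s -X DELETE -H "$(auth)" "$CEILOMETER_URL/v2/alarms/$1"; }
```

The test plan below exercises the same four operations through the `ceilometer` CLI instead.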
Comment 5 Ami Jeain 2013-10-28 07:43:03 EDT
QANAK'ing due to QE capacity
Comment 8 Eoghan Glynn 2013-12-09 08:57:19 EST
How To Test

0. Install a packstack all-in-one deployment, then spin up an instance in the usual way.

Ensure the compute agent is gathering metrics at a reasonable cadence (for example, every 60 seconds instead of the default 10 minutes):

  sudo sed -i '/^ *name: cpu_pipeline$/ { n ; s/interval: 600$/interval: 60/ }' /etc/ceilometer/pipeline.yaml
  sudo service openstack-ceilometer-compute restart
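The sed edit above can be dry-run against a throwaway sample first; the sample below mirrors the Havana-era pipeline.yaml layout (the real file carries additional pipelines and meters):

```shell
# Dry-run of the interval edit against a minimal pipeline.yaml sample.
f=$(mktemp)
cat > "$f" <<'EOF'
---
-
    name: cpu_pipeline
    interval: 600
    meters:
        - "cpu"
EOF

# Same script as above: on the matching name line, advance to the next
# line (the interval) and rewrite 600 -> 60.
sed -i '/^ *name: cpu_pipeline$/ { n ; s/interval: 600$/interval: 60/ }' "$f"
grep 'interval: 60$' "$f"
```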

1. Create an alarm with a threshold sufficiently low that it's guaranteed to go into alarm:

  ceilometer alarm-threshold-create --name cpu_high_bz_988358 --description 'instance running hot'  \
     --meter-name cpu_util  --threshold 0.01 --comparison-operator gt  --statistic avg \
     --period 60 --evaluation-periods 1 \
     --alarm-action 'log://' \
     --query resource_id=$INSTANCE_ID

  ALARM_ID=$(ceilometer alarm-list | grep cpu_high_bz_988358 | sed 's/^| //' | sed 's/ | .*$//')
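The two-sed ID extraction above can also be done with a single awk pass over the list output. A self-contained sketch, using a hypothetical UUID and a table layout assumed to match the CLI's default rendering:

```shell
# Hypothetical alarm-list output; the UUID is a placeholder.
LIST_OUTPUT='+--------------------------------------+--------------------+
| Alarm ID                             | Name               |
+--------------------------------------+--------------------+
| 11111111-2222-3333-4444-555555555555 | cpu_high_bz_988358 |
+--------------------------------------+--------------------+'

# Take the first data column of the matching row and strip padding spaces.
ALARM_ID=$(echo "$LIST_OUTPUT" | awk -F'|' '/cpu_high_bz_988358/ { gsub(/ /, "", $2); print $2 }')
echo "$ALARM_ID"
```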

2. Retrieve the alarm state:

  ceilometer alarm-show -a $ALARM_ID

  Ensure the reported attributes match those given on creation in step #1.

3. Ensure it transitions into the alarm state within the evaluation period:

   sleep 60 ; ceilometer alarm-show -a $ALARM_ID | grep state

4. Update the alarm with a threshold sufficiently high that it's guaranteed to flip out of alarm:

  ceilometer alarm-update --threshold 99.0 -a $ALARM_ID

  Ensure the reported attributes match those given on creation in step #1, modulo the updated attribute (the threshold in this case).

5. Ensure it transitions into the ok state within the evaluation period:

   sleep 60 ; ceilometer alarm-show -a $ALARM_ID | grep state
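Instead of the fixed `sleep 60` in steps 3 and 5, the state transition can be polled until it happens or a timeout expires. A sketch; `get_alarm_state` is a stub standing in for the real `ceilometer alarm-show` probe so the snippet is self-contained:

```shell
# Stub probe; in a real run it would wrap:
#   ceilometer alarm-show -a $ALARM_ID | grep state
get_alarm_state() { echo alarm; }

# Retry the probe until the expected state appears or the budget runs out.
wait_for_state() {
    want=$1
    tries=${2:-12}            # default: 12 tries x 5s = 60s, one period
    while [ "$tries" -gt 0 ]; do
        [ "$(get_alarm_state)" = "$want" ] && return 0
        sleep 5
        tries=$((tries - 1))
    done
    return 1
}

wait_for_state alarm && echo "reached alarm state"
```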

6. Delete the alarm:

  ceilometer alarm-delete -a $ALARM_ID

7. Ensure the alarm is no longer reported (the grep should return nothing, and alarm-show should fail with a not-found error):

  ceilometer alarm-list | grep $ALARM_ID
  ceilometer alarm-show -a $ALARM_ID
Comment 12 errata-xmlrpc 2013-12-19 19:15:19 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

