Provide an API for alarms to be created, retrieved, updated and destroyed. Support analogous actions in the python-ceilometerclient library and CLI. Upstream blueprint: https://blueprints.launchpad.net/ceilometer/+spec/alarm-api
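For reference, the same CRUD cycle through the python-ceilometerclient library looks roughly like the sketch below. This is illustrative only: credentials are assumed to come from the usual OS_* environment variables, and the exact create() payload tracks the server-side alarm schema, which changed across releases (the flat threshold fields used by `alarm-threshold-create` versus the nested threshold_rule shown here, introduced along with alarm types).

# Minimal sketch of alarm CRUD via python-ceilometerclient (v2 API).
# The threshold_rule layout below is one version of the schema; treat
# the create() kwargs as illustrative rather than definitive.
import os
from ceilometerclient import client

cclient = client.get_client(
    2,
    os_username=os.environ['OS_USERNAME'],
    os_password=os.environ['OS_PASSWORD'],
    os_tenant_name=os.environ['OS_TENANT_NAME'],
    os_auth_url=os.environ['OS_AUTH_URL'],
)

# Create (analogous to `ceilometer alarm-threshold-create`).
alarm = cclient.alarms.create(
    name='cpu_high',
    description='instance running hot',
    type='threshold',
    threshold_rule=dict(
        meter_name='cpu_util',
        threshold=70.0,
        comparison_operator='gt',
        statistic='avg',
        period=60,
        evaluation_periods=1,
    ),
    alarm_actions=['log://'],
)

# Retrieve, update, destroy (alarm-show / alarm-update / alarm-delete).
print(cclient.alarms.get(alarm.alarm_id).state)
cclient.alarms.update(alarm.alarm_id, description='updated description')
cclient.alarms.delete(alarm.alarm_id)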
QANAK'ing due to QE capacity
How To Test
===========

0. Install packstack allinone, then spin up an instance in the usual way.

   Ensure the compute agent is gathering metrics at a reasonable cadence
   (for example every 60s, instead of every 10mins as per the default):

     sudo sed -i '/^ *name: cpu_pipeline$/ { n ; s/interval: 600$/interval: 60/ }' /etc/ceilometer/pipeline.yaml
     sudo service openstack-ceilometer-compute restart

1. Create an alarm with a threshold sufficiently low that it's guaranteed
   to go into alarm:

     ceilometer alarm-threshold-create --name cpu_high_bz_988358 --description 'instance running hot' \
       --meter-name cpu_util --threshold 0.01 --comparison-operator gt --statistic avg \
       --period 60 --evaluation-periods 1 \
       --alarm-action 'log://' \
       --query resource_id=$INSTANCE_ID

     ALARM_ID=$(ceilometer alarm-list | grep cpu_high_bz_988358 | sed 's/^| //' | sed 's/ | .*$//')

2. Retrieve the alarm state:

     ceilometer alarm-show -a $ALARM_ID

   Ensure the reported attributes match those given on creation in step #1.

3. Ensure it transitions into the alarm state within the evaluation period
   (see the polling sketch after these steps):

     sleep 60 ; ceilometer alarm-show -a $ALARM_ID | grep state

4. Update the alarm with a threshold sufficiently high that it's guaranteed
   to flip out of alarm:

     ceilometer alarm-update --threshold 99.0 -a $ALARM_ID

   Ensure the reported attributes match those given on creation in step #1,
   modulo the updated attribute (the threshold in this case).

5. Ensure it transitions into the ok state within the evaluation period:

     sleep 60 ; ceilometer alarm-show -a $ALARM_ID | grep state

6. Delete the alarm:

     ceilometer alarm-delete -a $ALARM_ID

7. Ensure the alarm is no longer reported:

     ceilometer alarm-list | grep $ALARM_ID
     ceilometer alarm-show -a $ALARM_ID
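The state checks in steps 3 and 5 can also be scripted against the library instead of sleeping and eyeballing alarm-show output. A minimal sketch, assuming a cclient constructed as in the earlier snippet and an alarm_id equal to the $ALARM_ID captured above; wait_for_state is a hypothetical helper, not part of python-ceilometerclient:

import time

def wait_for_state(cclient, alarm_id, expected, timeout=120, interval=10):
    # Poll the alarm (the library-side equivalent of `ceilometer
    # alarm-show`) until it reaches the expected state -- 'ok', 'alarm'
    # or 'insufficient data' -- or the timeout expires.
    deadline = time.time() + timeout
    while time.time() < deadline:
        state = cclient.alarms.get(alarm_id).state
        if state == expected:
            return state
        time.sleep(interval)
    raise RuntimeError('alarm %s never reached state %r' % (alarm_id, expected))

# Step 3: with the low threshold from step 1, the alarm should trip
# within one evaluation period.
wait_for_state(cclient, alarm_id, 'alarm')

# Step 5: after the threshold is raised in step 4, it should clear.
wait_for_state(cclient, alarm_id, 'ok')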
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHEA-2013-1859.html