Red Hat Bugzilla – Bug 1261046
[RFE] Gather Total RAM Buffered and Total Cached Memory SNMP metrics
Last modified: 2016-04-07 17:07:14 EDT
Add Total RAM Buffered (".1.3.6.1.4.1.2021.4.14.0") and Total Cached Memory (".1.3.6.1.4.1.2021.4.15.0") to the set of SNMP metrics that are polled for.
Testing this requires a machine running snmpd. It can be the same machine on which you are doing testing, or an instance you create. Quite some time ago I wrote this up: https://tank.peermore.com/tanks/cdent-rhat/SnmpWithCeilometer
which provides the nitty-gritty on how to get things set up.
If you're using an install that is already configured to do snmp metering, some of these steps can be skipped and you can just do the "did I get some meters" section at the end.
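Before wiring anything into ceilometer, it can help to query the two new OIDs directly. A minimal sketch, assuming the net-snmp command-line tools are installed; the community string and host are the example values used later in this plan:

```shell
# Example values from this test plan -- substitute your own.
COMMUNITY=hizzouse
HOST=192.168.2.2
# UCD-SNMP-MIB memBuffer and memCached, the OIDs behind the new meters:
BUFFER_OID=.1.3.6.1.4.1.2021.4.14.0
CACHED_OID=.1.3.6.1.4.1.2021.4.15.0
# Illustrative only; these need a reachable snmpd to return values.
snmpwalk -v2c -c "$COMMUNITY" "$HOST" "$BUFFER_OID" || true
snmpwalk -v2c -c "$COMMUNITY" "$HOST" "$CACHED_OID" || true
```

If snmpd is answering, each command prints a single integer value (kB of buffered and cached memory respectively).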
Once you have confirmed (with an snmpwalk) that snmp is even functioning, you can test that it is being polled by:
* If you are not using an instance, configuring a manual resource in pipeline.yaml:
- name: meter_snmp
"hizzouse" should be replaced by the snmp community, 192.168.2.2 with the IP of the machine on which you have configured sndmpd (see the link above for more details).
* Restart the polling agent that is polling the central namespace.
* Wait the polling interval and then check for samples:
In the polling agent log you will see entries like:
Polling pollster hardware.memory.buffer in the context of meter_snmp
Polling pollster hardware.memory.cached in the context of meter_snmp
If you do a search for samples for the IP of the snmpd running host:
$ ceilometer sample-list -q resource_id=192.168.2.2
you will see a variety of hardware metrics, including 'hardware.memory.buffer' and 'hardware.memory.cached'.
If you are testing using an instance and the instance was discovered, then you'll need to query by resource_id.
I find using an instance less convenient: you have to configure it for snmpd and SNMP access, and if you're going to do that you may as well do it on the machine you're currently working on.
Verified according to the test plan.
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.