Description of problem:

The current volume resource type (/etc/ceilometer/gnocchi_resources.yaml) only collects the following metrics / attributes:

  - resource_type: volume
    metrics:
      - 'volume'
      - 'volume.size'
      - 'volume.create'
      - 'volume.delete'
      - 'volume.update'
      - 'volume.resize'
      - 'volume.attach'
      - 'volume.detach'
    attributes:
      display_name: resource_metadata.display_name

It would be useful to collect instance_id and image_id as additional attributes, giving operators/users feature parity (and "visibility parity") between instances booted from images (with ephemeral storage) and instances booted from volumes.

Version-Release number of selected component (if applicable):
python-ceilometer-7.1.1-4.el7ost.noarch
openstack-ceilometer-compute-7.1.1-4.el7ost.noarch
puppet-ceilometer-9.5.0-2.el7ost.noarch
python-ceilometerclient-2.6.2-1.el7ost.noarch
openstack-ceilometer-notification-7.1.1-4.el7ost.noarch
openstack-ceilometer-polling-7.1.1-4.el7ost.noarch
openstack-ceilometer-central-7.1.1-4.el7ost.noarch
openstack-ceilometer-api-7.1.1-4.el7ost.noarch
openstack-ceilometer-collector-7.1.1-4.el7ost.noarch
python-ceilometermiddleware-0.5.2-1.el7ost.noarch
openstack-ceilometer-common-7.1.1-4.el7ost.noarch

How reproducible:
always

Steps to Reproduce:
1. Deploy OSP 10.
2. Create a Cinder volume from an image, or create an instance from an image (into a new volume).
3. Check the output of: gnocchi resource show <UUID> --type volume

Actual results:
No instance_id or image_id attributes are present.

Expected results:
The instance_id and image_id UUIDs are collected and stored as attributes for Cinder volumes.

Additional info:
I've been able to collect volume_type and image_id for volumes as follows:

  gnocchi resource-type update -a volume_type:string:false volume
  gnocchi resource-type update -a image_id:string:false volume

  [stack@undercloud-10 ~]$ gnocchi resource-type show volume
  +-------------------------+-----------------------------------------------------------+
  | Field                   | Value                                                     |
  +-------------------------+-----------------------------------------------------------+
  | attributes/display_name | max_length=255, min_length=0, required=False, type=string |
  | attributes/image_id     | max_length=255, min_length=0, required=False, type=string |
  | attributes/volume_type  | max_length=255, min_length=0, required=False, type=string |
  | name                    | volume                                                    |
  | state                   | active                                                    |
  +-------------------------+-----------------------------------------------------------+

and editing /etc/ceilometer/gnocchi_resources.yaml like:

  - resource_type: volume
    metrics:
      - 'volume'
      - 'volume.size'
      - 'volume.create'
      - 'volume.delete'
      - 'volume.update'
      - 'volume.resize'
      - 'volume.attach'
      - 'volume.detach'
    attributes:
      display_name: resource_metadata.display_name
      volume_type: resource_metadata.volume_type
      image_id: resource_metadata.glance_metadata[?key = "image_id"].value
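To illustrate what the glance_metadata mapping above resolves to, here is a minimal Python sketch of the lookup: find the entry whose "key" is "image_id" in the glance_metadata list and return its "value". This is illustrative only, not Ceilometer's actual path-resolution code, and the sample resource_metadata payload (including the UUID) is hypothetical.

```python
def lookup_image_id(resource_metadata):
    """Return the value of the glance_metadata entry with key 'image_id',
    mirroring the YAML path resource_metadata.glance_metadata[?key = "image_id"].value."""
    for entry in resource_metadata.get("glance_metadata", []):
        if entry.get("key") == "image_id":
            return entry.get("value")
    return None  # attribute stays empty when no matching entry exists

# Hypothetical sample payload for demonstration purposes only.
sample = {
    "display_name": "test-volume",
    "volume_type": "tripleo",
    "glance_metadata": [
        {"key": "disk_format", "value": "qcow2"},
        {"key": "image_id", "value": "11111111-2222-3333-4444-555555555555"},
    ],
}

print(lookup_image_id(sample))
```

This also makes the limitation in the next comment concrete: if a sample's resource_metadata carries no glance_metadata (as with samples built by ceilometer-agent-central), the lookup yields nothing and the attribute remains empty.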
Note that we will not support upgrades if the Ceilometer resource types are not those created by Ceilometer. Tracking new information always needs a code change; hacking the resource types and YAML as a workaround is rarely enough. So this solution only partially works: only samples built from notifications will have image_id set. For samples built by ceilometer-agent-central, image_id will be empty. Your change will make the next major OSP upgrade fail for sure.
This has been implemented upstream and will be part of OSP 14 as planned.
The testing instructions are basically in the description of the bug.
RFEs are usually not backported. Also, this feature includes a database schema upgrade, which would break the upgrade scenario if we backported it.
Mehdi, please provide testing instructions for this RFE. Thanks!
1. Create a Cinder volume from an image, or create an instance from an image (into a new volume).
2. Wait 5 minutes to let Ceilometer gather some data.
3. Check the output of: gnocchi resource show <VOLUME_UUID> --type volume
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:2811