Bug 1524402 - [RFE] extend default volume resource type definition to include instance_id and image_id
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-ceilometer
Version: 10.0 (Newton)
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: Upstream M1
Target Release: 15.0 (Stein)
Assignee: Eoghan Glynn
QA Contact: Nataf Sharabi
URL:
Whiteboard:
Depends On: 1625912
Blocks:
Reported: 2017-12-11 12:39 UTC by Luca Miccini
Modified: 2021-02-16 11:00 UTC
CC: 6 users

Fixed In Version: openstack-ceilometer-10.0.1-0.20180530162349.1c02e4b.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-09-21 11:15:27 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
OpenStack gerrit 527050 0 None MERGED cinder: link volume to image and instance 2021-02-16 10:59:42 UTC
Red Hat Product Errata RHEA-2019:2811 0 None None None 2019-09-21 11:16:08 UTC

Description Luca Miccini 2017-12-11 12:39:13 UTC
Description of problem:

The current volume resource type (/etc/ceilometer/gnocchi_resources.yaml) only collects the following metrics / attributes:

  - resource_type: volume
    metrics:
      - 'volume'
      - 'volume.size'
      - 'volume.create'
      - 'volume.delete'
      - 'volume.update'
      - 'volume.resize'
      - 'volume.attach'
      - 'volume.detach'
    attributes:
      display_name: resource_metadata.display_name

It would be useful to collect instance_id and image_id as additional attributes to allow operators/users to have feature parity (and "visibility parity") between instances booted from images (with ephemeral storage) and instances booted from volumes.
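A minimal sketch of what the extended definition in /etc/ceilometer/gnocchi_resources.yaml could look like. Note that the attribute paths for image_id and instance_id below are illustrative assumptions only; the exact paths used by the merged upstream change are authoritative:

```yaml
  - resource_type: volume
    metrics:
      - 'volume'
      - 'volume.size'
      - 'volume.create'
      - 'volume.delete'
      - 'volume.update'
      - 'volume.resize'
      - 'volume.attach'
      - 'volume.detach'
    attributes:
      display_name: resource_metadata.display_name
      # Assumed metadata paths, for illustration only:
      image_id: resource_metadata.glance_metadata[?key = "image_id"].value
      instance_id: resource_metadata.instance_uuid
```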

Version-Release number of selected component (if applicable):

python-ceilometer-7.1.1-4.el7ost.noarch
openstack-ceilometer-compute-7.1.1-4.el7ost.noarch
puppet-ceilometer-9.5.0-2.el7ost.noarch
python-ceilometerclient-2.6.2-1.el7ost.noarch
openstack-ceilometer-notification-7.1.1-4.el7ost.noarch
openstack-ceilometer-polling-7.1.1-4.el7ost.noarch
openstack-ceilometer-central-7.1.1-4.el7ost.noarch
openstack-ceilometer-api-7.1.1-4.el7ost.noarch
openstack-ceilometer-collector-7.1.1-4.el7ost.noarch
python-ceilometermiddleware-0.5.2-1.el7ost.noarch
openstack-ceilometer-common-7.1.1-4.el7ost.noarch

How reproducible:

always

Steps to Reproduce:
1. Deploy OSP 10.
2. Create a Cinder volume from an image, or create an instance from an image (into a new volume).
3. Check the output of: gnocchi resource show <UUID> --type volume

Actual results:

no instance_id or image_id attributes are present

Expected results:

instance_id and image_id UUIDs to be collected and stored as attributes for Cinder volumes

Additional info:

Comment 4 Luca Miccini 2017-12-15 17:42:13 UTC
I've been able to collect volume_type and image_id for volumes as follows:

gnocchi resource-type update -a volume_type:string:false volume
gnocchi resource-type update -a image_id:string:false volume

[stack@undercloud-10 ~]$ gnocchi resource-type show  volume
+-------------------------+-----------------------------------------------------------+
| Field                   | Value                                                     |
+-------------------------+-----------------------------------------------------------+
| attributes/display_name | max_length=255, min_length=0, required=False, type=string |
| attributes/image_id     | max_length=255, min_length=0, required=False, type=string |
| attributes/volume_type  | max_length=255, min_length=0, required=False, type=string |
| name                    | volume                                                    |
| state                   | active                                                    |
+-------------------------+-----------------------------------------------------------+


and editing /etc/ceilometer/gnocchi_resources.yaml like:

  - resource_type: volume
    metrics:
      - 'volume'
      - 'volume.size'
      - 'volume.create'
      - 'volume.delete'
      - 'volume.update'
      - 'volume.resize'
      - 'volume.attach'
      - 'volume.detach'
    attributes:
      display_name: resource_metadata.display_name
      volume_type: resource_metadata.volume_type
      image_id: resource_metadata.glance_metadata[?key = "image_id"].value
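For context, the image_id line above is a jsonpath-style filter over Cinder's glance_metadata, which arrives as a list of {key, value} pairs. A minimal Python sketch of what that lookup resolves to, using a hypothetical sample payload (this is an illustration, not Ceilometer code):

```python
def lookup_image_id(resource_metadata):
    """Mimic glance_metadata[?key = "image_id"].value: return the 'value'
    of the entry whose 'key' is 'image_id', or None if absent."""
    for entry in resource_metadata.get("glance_metadata", []):
        if entry.get("key") == "image_id":
            return entry.get("value")
    return None


# Hypothetical resource_metadata from a volume notification sample:
resource_metadata = {
    "display_name": "vol-from-image",
    "volume_type": "tripleo",
    "glance_metadata": [
        {"key": "image_id", "value": "11111111-2222-3333-4444-555555555555"},
        {"key": "disk_format", "value": "qcow2"},
    ],
}

# Prints the hypothetical image id above.
print(lookup_image_id(resource_metadata))
```

This also makes Mehdi's point in comment 5 concrete: the lookup only works when glance_metadata is present in the payload, which is the case for notification-built samples but not for those built by ceilometer-agent-central.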

Comment 5 Mehdi ABAAKOUK 2017-12-18 07:03:23 UTC
Note that we will not support upgrades if the Ceilometer resource types are not the ones created by Ceilometer itself.

Tracking new information always needs a code change; just hacking the resource types and the YAML as a workaround is rarely enough.

So this solution only partially works: only samples built from notifications will have image_id set. For samples built by ceilometer-agent-central, image_id will be empty.

Your change will make the next major OSP upgrade fail for sure.

Comment 7 Mehdi ABAAKOUK 2018-03-26 06:24:27 UTC
This has been implemented upstream and will be part of OSP 14 as planned.

Comment 18 Leonid Natapov 2018-12-12 10:15:28 UTC
Testing instructions are basically in the description of the bug.

Comment 20 Mehdi ABAAKOUK 2018-12-12 11:55:31 UTC
RFEs are usually not backported. Also, this feature includes a database schema upgrade that would break the upgrade scenario if we backported it.

Comment 22 Leonid Natapov 2019-01-15 12:46:04 UTC
Mehdi, please provide testing instructions for this RFE. Thanks!

Comment 23 Mehdi ABAAKOUK 2019-01-16 07:05:10 UTC
1. Create a Cinder volume from an image, or create an instance from an image (into a new volume).
2. Wait 5 minutes to let Ceilometer gather some data.
3. Check the output of: gnocchi resource show <VOLUME_UUID> --type volume

Comment 29 errata-xmlrpc 2019-09-21 11:15:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:2811

