Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1819690

Summary: Ceilometer doesn't create snapshot resource on cinder snapshot-managed
Product: Red Hat OpenStack
Component: openstack-ceilometer
Reporter: rohit londhe <rlondhe>
Assignee: Matthias Runge <mrunge>
Status: CLOSED ERRATA
QA Contact: Leonid Natapov <lnatapov>
Severity: high
Priority: high
Version: 13.0 (Queens)
CC: apevec, csibbitt, fwissing, jbadiapa, mrunge, pkilambi
Target Milestone: z13
Keywords: Triaged, ZStream
Target Release: 13.0 (Queens)
Hardware: Unspecified
OS: Linux
Fixed In Version: openstack-ceilometer-10.0.1-11.el7ost
Last Closed: 2020-10-28 18:31:03 UTC
Type: Bug

Description rohit londhe 2020-04-01 10:35:38 UTC
Description of problem:

We are migrating OpenStack from an old environment to a new one, mirroring RBD images volume by volume via rbd-mirror. If a source volume has snapshots, those are also cloned on the target side.

After a volume and its snapshots are cloned, we run `cinder manage` for the volumes and `cinder snapshot-manage` for the snapshots.
  - When the volume is attached to an instance, I see that a Gnocchi resource is created. Until it is attached, no Gnocchi resource is created.

 - For snapshots, however, Ceilometer is never notified to create the Gnocchi resource. We tried creating a clone volume from the snapshot, but with no luck.

 A simple way to reproduce: create an RBD image and a snapshot directly in the Ceph backend, then run `cinder manage` and `cinder snapshot-manage`.

In this case, ceilometer-central does not pick up those snapshots, or volumes in the `available` state, and never creates the corresponding Gnocchi resources.
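The steps above can be sketched as the following commands. This is only a sketch: the pool name `mypool`, image name `cephimage1`, snapshot name `snap1`, the managed volume name, and the cinder host string are assumptions based on this report, and a real run needs a deployed cloud with a Ceph backend.

```shell
# Hypothetical reproduction sketch; all names below are assumptions,
# adjust them for your own deployment before running.
reproduce() {
    # create an image and a snapshot directly in the ceph backend
    rbd -p mypool create --size 4G cephimage1
    rbd snap create mypool/cephimage1@snap1

    # bring the pre-existing image, then its snapshot, under cinder management
    cinder manage --name managed-vol1 hostgroup@tripleo_ceph#tripleo_ceph cephimage1
    cinder snapshot-manage managed-vol1 snap1

    # symptom reported here: no gnocchi resource appears for the managed snapshot
}
```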

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
provided in the next comment

Actual results:

`cinder manage` operations are not picked up by Ceilometer.

Expected results:

Ceilometer should record `cinder manage` operations in some form.

Additional info:

Comment 35 Matthias Runge 2020-05-22 08:03:35 UTC
Hi, thank you for sharing the description here. However, I still struggle to reproduce. Rohit, if you are able to, please share the environment.

[heat-admin@controller-0 ~]$ ceph status
  cluster:
    id:     47ca8e80-9a76-11ea-91dc-5254008c14ff
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum controller-0
    mgr: controller-0(active)
    osd: 5 osds: 5 up, 5 in
 
  data:
    pools:   6 pools, 163 pgs
    objects: 0 objects, 0B
    usage:   5.02GiB used, 45.0GiB / 50.0GiB avail
    pgs:     163 active+clean


# create an image in pool mypool
rbd -p mypool create  --size 4G cephimage1

# see there is no watcher
rbd -p mypool status cephimage1

# get info on image
rbd -p mypool info cephimage1
rbd image 'cephimage1':
	size 4GiB in 1024 objects
	order 22 (4MiB objects)
	block_name_prefix: rbd_data.3a7d6b8b4567
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
	flags: 
	create_timestamp: Fri May 22 07:50:41 2020


However, this image is not manageable by cinder?


(overcloud) [stack@undercloud-0 ~]$ cinder --os-volume-api-version 3.8 manageable-list  hostgroup@tripleo_ceph#tripleo_ceph
+-----------+------+----------------+-----------------+-----------+------------+
| reference | size | safe_to_manage | reason_not_safe | cinder_id | extra_info |
+-----------+------+----------------+-----------------+-----------+------------+
+-----------+------+----------------+-----------------+-----------+------------+
(overcloud) [stack@undercloud-0 ~]$ cinder get-pools
+----------+-------------------------------------+
| Property | Value                               |
+----------+-------------------------------------+
| name     | hostgroup@tripleo_ceph#tripleo_ceph |
+----------+-------------------------------------+


There is still no manageable image in ceph. How would I create that?

Comment 50 Matthias Runge 2020-05-28 14:53:14 UTC
commit 7df749884b21ea760ff3ce07f4b257d2f4a89318 (HEAD -> volume_manage)
Author: Matthias Runge <mrunge>
Date:   Wed May 27 20:02:51 2020 +0200

    Add volume.manage to metrics.
    
    This allows cinder-managed ceph volumes
    to be monitored.
    The second addition allows the same for snapshots.
    
    Change-Id: I7f045fa618e78351e05ad69bc9580e98487f0c29

diff --git a/ceilometer/data/meters.d/meters.yaml b/ceilometer/data/meters.d/meters.yaml
index 68ab4e00..f06aa9d8 100644
--- a/ceilometer/data/meters.d/meters.yaml
+++ b/ceilometer/data/meters.d/meters.yaml
@@ -120,6 +120,7 @@ metric:
       - 'volume.attach.*'
       - 'volume.detach.*'
       - 'volume.update.*'
+      - 'volume.manage.*'
     type: 'gauge'
     unit: 'GB'
     volume: $.payload.size
@@ -137,6 +138,7 @@ metric:
       - 'snapshot.exists'
       - 'snapshot.create.*'
       - 'snapshot.delete.*'
+      - 'snapshot.manage.*'
     type: 'gauge'
     unit: 'GB'
     volume: $.payload.volume_size
diff --git a/ceilometer/publisher/data/gnocchi_resources.yaml b/ceilometer/publisher/data/gnocchi_resources.yaml
index 656e0c82..e3aafe3b 100644
--- a/ceilometer/publisher/data/gnocchi_resources.yaml
+++ b/ceilometer/publisher/data/gnocchi_resources.yaml
@@ -238,6 +238,10 @@ resources:
       volume.snapshot.size:
       volume.backup.size:
       backup.size:
+      volume.manage_existing.start:
+      volume.manage_existing.end:
+      volume.manage_existing_snapshot.start:
+      volume.manage_existing_snapshot.end:
     attributes:
       display_name: resource_metadata.(display_name|name)
       volume_type: resource_metadata.volume_type
@@ -406,4 +410,5 @@ resources:
       network.services.lb.health_monitor:
       network.services.lb.loadbalancer:
       network.services.lb.total.connections:
-      network.services.lb.active.connections:
\ No newline at end of file
+      network.services.lb.active.connections:
+


This patch covers both managed volumes and managed snapshots.
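For illustration, here is a minimal sketch (not part of the patch) of how glob patterns like those in meters.yaml select notification event types: an incoming event type is matched against the `event_type` list, and without `volume.manage.*` in that list, manage notifications produce no sample. The pattern lists below are the fragment visible in the diff above, and `volume.manage.end` is a hypothetical event type name.

```python
# Sketch: glob matching of notification event types, as used conceptually
# by the meters.yaml definitions. Pattern lists are the fragment shown in
# the diff above; 'volume.manage.end' is a hypothetical event type name.
import fnmatch

patterns_before = ['volume.attach.*', 'volume.detach.*', 'volume.update.*']
patterns_after = patterns_before + ['volume.manage.*']  # added by the patch

def is_metered(event_type, patterns):
    """Return True if the event type matches any of the glob patterns."""
    return any(fnmatch.fnmatch(event_type, p) for p in patterns)

print(is_metered('volume.manage.end', patterns_before))  # False: dropped
print(is_metered('volume.manage.end', patterns_after))   # True: sample emitted
```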

Comment 62 Leonid Natapov 2020-10-21 04:52:38 UTC
Tested with the Ceph RBD backend.

Comment 68 errata-xmlrpc 2020-10-28 18:31:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (openstack-ceilometer bug fix advisory), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4396