Bug 1819690
| Summary: | Ceilometer doesn't create snapshot resource on cinder snapshot-managed | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | rohit londhe <rlondhe> |
| Component: | openstack-ceilometer | Assignee: | Matthias Runge <mrunge> |
| Status: | CLOSED ERRATA | QA Contact: | Leonid Natapov <lnatapov> |
| Severity: | high | Docs Contact: | |
| Priority: | high | ||
| Version: | 13.0 (Queens) | CC: | apevec, csibbitt, fwissing, jbadiapa, mrunge, pkilambi |
| Target Milestone: | z13 | Keywords: | Triaged, ZStream |
| Target Release: | 13.0 (Queens) | ||
| Hardware: | Unspecified | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | openstack-ceilometer-10.0.1-11.el7ost | Doc Type: | If docs needed, set a value |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2020-10-28 18:31:03 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
Description

rohit londhe, 2020-04-01 10:35:38 UTC

Hi, thank you for sharing the description here. However, I still struggle to reproduce. Rohit, if you are able to, please share the environment.
```
[heat-admin@controller-0 ~]$ ceph status
  cluster:
    id:     47ca8e80-9a76-11ea-91dc-5254008c14ff
    health: HEALTH_OK
  services:
    mon: 1 daemons, quorum controller-0
    mgr: controller-0(active)
    osd: 5 osds: 5 up, 5 in
  data:
    pools:   6 pools, 163 pgs
    objects: 0 objects, 0B
    usage:   5.02GiB used, 45.0GiB / 50.0GiB avail
    pgs:     163 active+clean

# create an image in pool mypool
rbd -p mypool create --size 4G cephimage1
# see there is no watcher
rbd -p mypool status cephimage1
# get info on image
rbd -p mypool info cephimage1
rbd image 'cephimage1':
    size 4GiB in 1024 objects
    order 22 (4MiB objects)
    block_name_prefix: rbd_data.3a7d6b8b4567
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    flags:
    create_timestamp: Fri May 22 07:50:41 2020
```
However, this image is not manageable by cinder?
```
(overcloud) [stack@undercloud-0 ~]$ cinder --os-volume-api-version 3.8 manageable-list hostgroup@tripleo_ceph#tripleo_ceph
+-----------+------+----------------+-----------------+-----------+------------+
| reference | size | safe_to_manage | reason_not_safe | cinder_id | extra_info |
+-----------+------+----------------+-----------------+-----------+------------+
+-----------+------+----------------+-----------------+-----------+------------+
(overcloud) [stack@undercloud-0 ~]$ cinder get-pools
+----------+-------------------------------------+
| Property | Value                               |
+----------+-------------------------------------+
| name     | hostgroup@tripleo_ceph#tripleo_ceph |
+----------+-------------------------------------+
```
There is still no manageable image in ceph. How would I create that?
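One way to make a pre-created RBD image manageable is to adopt it explicitly with `cinder manage`, using the host string reported by `cinder get-pools` and the image name from the rbd transcript above. A sketch (the `--name` value is illustrative, and the snapshot-manage placeholders are environment-specific):

```shell
# Bring the pre-existing RBD image "cephimage1" under Cinder management.
# --id-type source-name tells the driver to look the backend object up
# by name rather than by UUID.
cinder manage --id-type source-name \
    --name managed-cephimage1 \
    hostgroup@tripleo_ceph#tripleo_ceph cephimage1

# Snapshots of an already-managed volume can be adopted analogously
# (requires volume API microversion 3.8 or later):
cinder --os-volume-api-version 3.8 snapshot-manage \
    --id-type source-name <managed-volume> <rbd-snapshot-name>
```

Once adopted, the volume should appear in `cinder list` like any other volume, and the manage operation emits the notifications the patch below is concerned with.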
```
commit 7df749884b21ea760ff3ce07f4b257d2f4a89318 (HEAD -> volume_manage)
Author: Matthias Runge <mrunge>
Date:   Wed May 27 20:02:51 2020 +0200

    Add volume.manage to metrics.

    This allows cinder-managed ceph volumes to be monitored.
    The second addition here allows the same for snapshots.

    Change-Id: I7f045fa618e78351e05ad69bc9580e98487f0c29
```

```diff
diff --git a/ceilometer/data/meters.d/meters.yaml b/ceilometer/data/meters.d/meters.yaml
index 68ab4e00..f06aa9d8 100644
--- a/ceilometer/data/meters.d/meters.yaml
+++ b/ceilometer/data/meters.d/meters.yaml
@@ -120,6 +120,7 @@ metric:
       - 'volume.attach.*'
       - 'volume.detach.*'
       - 'volume.update.*'
+      - 'volume.manage.*'
     type: 'gauge'
     unit: 'GB'
     volume: $.payload.size
@@ -137,6 +138,7 @@ metric:
       - 'snapshot.exists'
       - 'snapshot.create.*'
       - 'snapshot.delete.*'
+      - 'snapshot.manage.*'
     type: 'gauge'
     unit: 'GB'
     volume: $.payload.volume_size
diff --git a/ceilometer/publisher/data/gnocchi_resources.yaml b/ceilometer/publisher/data/gnocchi_resources.yaml
index 656e0c82..e3aafe3b 100644
--- a/ceilometer/publisher/data/gnocchi_resources.yaml
+++ b/ceilometer/publisher/data/gnocchi_resources.yaml
@@ -238,6 +238,10 @@ resources:
       volume.snapshot.size:
       volume.backup.size:
       backup.size:
+      volume.manage_existing.start:
+      volume.manage_existing.end:
+      volume.manage_existing_snapshot.start:
+      volume.manage_existing_snapshot.end:
     attributes:
       display_name: resource_metadata.(display_name|name)
       volume_type: resource_metadata.volume_type
@@ -406,4 +410,5 @@ resources:
       network.services.lb.health_monitor:
       network.services.lb.loadbalancer:
       network.services.lb.total.connections:
-      network.services.lb.active.connections:
\ No newline at end of file
+      network.services.lb.active.connections:
+
```
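The new `volume.manage.*` and `snapshot.manage.*` entries are wildcard patterns: a notification is collected when its event_type globs against any configured pattern. A minimal shell sketch of that matching behavior (illustrative only, not ceilometer's actual code):

```shell
# Sketch: an event is collected when its event_type matches any
# configured pattern; shell case patterns use the same *-globbing
# style as the meters.yaml entries.
matches() {
  local event="$1"; shift
  local p
  for p in "$@"; do
    case "$event" in
      $p) return 0 ;;
    esac
  done
  return 1
}

patterns=('snapshot.exists' 'snapshot.create.*' 'snapshot.delete.*' 'snapshot.manage.*')

matches snapshot.manage.end "${patterns[@]}" && echo "collected"  # prints "collected"
matches snapshot.update.end "${patterns[@]}" || echo "ignored"    # prints "ignored"
```

This is why the one-line additions suffice: any `snapshot.manage.<suffix>` notification now falls under the existing snapshot.size meter definition.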
This patch allows managing both volumes and snapshots. Tested with the Ceph RBD backend.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (openstack-ceilometer bug fix advisory), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4396
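With the fixed package (openstack-ceilometer-10.0.1-11.el7ost) installed, one possible end-to-end check is to manage a snapshot and confirm a resource appears in Gnocchi. A hedged sketch, with placeholders that are environment-specific:

```shell
# Adopt an existing RBD snapshot of a managed volume (API >= 3.8):
cinder --os-volume-api-version 3.8 snapshot-manage \
    --id-type source-name <managed-volume> <rbd-snapshot-name>

# Before the fix, no Gnocchi resource was created for the managed
# snapshot; afterwards it should be listed among the volume resources:
openstack metric resource list --type volume

# ...and carry measures for the snapshot size metric:
gnocchi measures show --resource-id <snapshot-resource-uuid> volume.snapshot.size
```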