Labeled perf counters for RGW op metrics are now split into separate sections in the output of `counter dump`: one for the user operation counters and one for the bucket operation counters.
Currently, the metrics sent by ceph-exporter do not include the user or bucket in the metric name. For example:
ceph_rgw_op_del_obj_bytes{Bucket="bkt", instance="localhost:9926", instance_id="8000", job="radosgw"}
ceph_rgw_op_del_obj_bytes{User="anonymous", instance="localhost:9926", instance_id="8000", job="radosgw"}
This forces Prometheus and Grafana queries to filter on an extra label dimension to isolate the specific data, which makes Grafana dashboards complex and resource-expensive to configure.
The request is to modify ceph-exporter so that it constructs the metric names it sends to Prometheus from the counter's section key. In the future, the keys could cover anything else, like groups, accounts, etc. (see the sketch after the examples below for one possible way such names could be derived).
Examples of what the final metrics sent to Prometheus should look like:
-- GLOBAL --
ceph_rgw_op_del_obj_bytes
ceph_rgw_op_del_obj_bytes{instance="localhost:9926", instance_id="8000", job="radosgw"}
-- Bucket operations --
ceph_rgw_op_del_bucket_obj_bytes
ceph_rgw_op_del_bucket_obj_bytes{Bucket="bkt", instance="localhost:9926", instance_id="8000", job="radosgw"}
ceph_rgw_op_del_bucket_obj_bytes{Bucket="bkt2", instance="localhost:9926", instance_id="8000", job="radosgw"}
-- User operations --
ceph_rgw_op_del_user_obj_bytes
ceph_rgw_op_del_user_obj_bytes{User="anonymous", instance="localhost:9926", instance_id="8000", job="radosgw"}
ceph_rgw_op_del_user_obj_bytes{User="test3", instance="localhost:9926", instance_id="8000", job="radosgw"}
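For illustration only, here is a minimal sketch (not the actual ceph-exporter code) of one way the exporter could derive the scoped metric names shown above. The scoped_metric_name function, its scope argument, and the rule of inserting the scope token after the op verb are assumptions made purely to reproduce the example names; the real implementation would take the scope from whichever labeled perf-counter section it is processing.

// Illustrative sketch only, not the actual ceph-exporter code.
#include <iostream>
#include <string>

// Build a Prometheus metric name from an RGW op counter name and a scope
// ("", "bucket" or "user"). The scope token is inserted after the op verb
// (the first token of the counter name) to reproduce the example names above.
std::string scoped_metric_name(const std::string& counter, const std::string& scope) {
  const std::string prefix = "ceph_rgw_op_";
  if (scope.empty()) {
    return prefix + counter;  // global: ceph_rgw_op_del_obj_bytes
  }
  const auto pos = counter.find('_');
  if (pos == std::string::npos) {
    return prefix + counter + "_" + scope;
  }
  return prefix + counter.substr(0, pos) + "_" + scope + counter.substr(pos);
}

int main() {
  std::cout << scoped_metric_name("del_obj_bytes", "")       << "\n"   // ceph_rgw_op_del_obj_bytes
            << scoped_metric_name("del_obj_bytes", "bucket") << "\n"   // ceph_rgw_op_del_bucket_obj_bytes
            << scoped_metric_name("del_obj_bytes", "user")   << "\n";  // ceph_rgw_op_del_user_obj_bytes
  return 0;
}

The Bucket or User value from the counter dump would still be attached as a Prometheus label, exactly as in the examples above; only the metric name gains the scope token.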
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2024:2743