Bug 2207748 - [RFE] Red Hat Ceph Storage 5, Prometheus daemon retention size not adjustable
Summary: [RFE] Red Hat Ceph Storage 5, Prometheus daemon retention size not adjustable
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.3z4
Assignee: Redouane Kachach Elhichou
QA Contact: Sayalee
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks: 2210690
 
Reported: 2023-05-16 19:04 UTC by Sayalee
Modified: 2023-07-19 16:20 UTC
CC List: 8 users

Fixed In Version: ceph-16.2.10-185.el8cp
Doc Type: Bug Fix
Doc Text:
.Added support to configure the `retention.size` parameter in Prometheus's specification file
Previously, Cephadm's Prometheus specification did not support the `retention.size` parameter: a `ServiceSpec` exception was raised whenever the user included this parameter in the specification file, so the user could not limit the size of Prometheus's data directory.
With this fix, users can configure the `retention.size` parameter in Prometheus's specification file. Cephadm passes this value to the Prometheus daemon, which limits the size of the data directory and thereby controls Prometheus's disk space usage.
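As a rough illustration of the mapping described in the doc text (this is a hypothetical sketch, not the actual cephadm templating code), a spec key such as `retention_size` ends up as a `--storage.tsdb.retention.size` flag on the Prometheus daemon's command line:

```python
def prometheus_daemon_args(spec: dict) -> list:
    """Translate Prometheus service-spec keys into daemon CLI flags.

    Illustrative only: the flag names are the real Prometheus flags,
    but this function and the spec-dict shape are assumptions, not
    cephadm's actual implementation.
    """
    mapping = {
        'retention_time': '--storage.tsdb.retention.time',
        'retention_size': '--storage.tsdb.retention.size',
    }
    return [f"{flag}={spec[key]}" for key, flag in mapping.items() if key in spec]
```

For the spec shown later in this report (`retention_time: "1y"`, `retention_size: "1GB"`), this would yield `--storage.tsdb.retention.time=1y --storage.tsdb.retention.size=1GB`.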
Clone Of:
Environment:
Last Closed: 2023-07-19 16:19:11 UTC
Embargoed:
rmandyam: needinfo+


Links
System                  ID              Private  Priority  Status  Summary  Last Updated
Red Hat Issue Tracker   RHCEPH-6681     0        None      None    None     2023-05-16 19:05:14 UTC
Red Hat Product Errata  RHBA-2023:4213  0        None      None    None     2023-07-19 16:20:02 UTC

Description Sayalee 2023-05-16 19:04:24 UTC
Description of problem:
=======================
As per the discussion in https://bugzilla.redhat.com/show_bug.cgi?id=2192594#c10 and https://bugzilla.redhat.com/show_bug.cgi?id=2192594#c11, opening this bug to track the fixes to be added for retention.size.


Version-Release number of selected component (if applicable):
=============================================================
RHCS 5.3z3 


How reproducible:
=================
Always


Steps to Reproduce:
===================
1) Deploy RHCS 5.3z3 cluster
2) Check if the Prometheus module is enabled
3) Check the existing values for flags "storage.tsdb.retention.size" and "storage.tsdb.retention.time" through API and CLI
4) Perform the two scenarios below:

Scenario (A):
* The existing values of "storage.tsdb.retention.size" and "storage.tsdb.retention.time" are 0 and 15d (the default values).
* Follow the KCS article https://access.redhat.com/solutions/6969242 as mentioned in #comment12 and edit the /var/lib/ceph/23adb48e-934b-11ed-8ca5-fa163e164f7f/prometheus.ceph-saya-bz-cgpcfb-node2/unit.run file to change the values as required; in this test scenario they were changed to "storage.tsdb.retention.size":"50MiB" and "storage.tsdb.retention.time":"30d".

Scenario (B):
* Change the existing spec file for the Prometheus daemon as mentioned in the provided doc text, as follows:

# cat <<EOF | ceph orch apply -i -
service_type: prometheus
service_name: prometheus
placement:
  count: 1
spec:
  retention_time: "1y"
  retention_size: "1GB"
EOF
Scheduled prometheus update...


Actual results:
===============
The fixes added in RHCS 5.3z3 targeted only retention.time; attempting to change retention.size failed:

[ceph: root@ceph-saraut-reten-n9m45s-node1-installer /]# cat <<EOF | ceph orch apply -i -
> service_type: prometheus
> service_name: prometheus
> placement:
>   count: 1
> spec:
>   retention_time: "1y"
>   retention_size: "1GB"
> EOF
Error EINVAL: ServiceSpec: __init__() got an unexpected keyword argument 'retention_size'
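The `EINVAL` above surfaces an ordinary Python `TypeError`: a class whose `__init__` does not declare a keyword rejects it with exactly this "unexpected keyword argument" message. A minimal sketch of the failure mode (`ServiceSpecSketch` is a hypothetical stand-in, not the real cephadm `ServiceSpec`):

```python
class ServiceSpecSketch:
    """Simplified stand-in for a spec class that, like the pre-fix
    ServiceSpec, knows retention_time but not retention_size."""
    def __init__(self, service_type, retention_time=None):
        self.service_type = service_type
        self.retention_time = retention_time

# Passing the undeclared keyword raises TypeError, which the
# orchestrator would report back to the CLI as EINVAL.
```

Adding `retention_size` as an accepted parameter, as the 5.3z4 fix does, makes the same call succeed.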


Expected results:
================
The user should be able to modify retention.size as well.

Comment 2 Scott Ostapovicz 2023-06-14 16:06:22 UTC
Missed the 5.3 z4 deadline.  Moving from z4 to z5.

Comment 11 errata-xmlrpc 2023-07-19 16:19:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.3 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:4213

