Bug 2112800 - [RFE] Red Hat Ceph Storage 5, Prometheus daemon retention time not adjustable
Summary: [RFE] Red Hat Ceph Storage 5, Prometheus daemon retention time not adjustable
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.2
Hardware: All
OS: All
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 6.0
Assignee: Adam King
QA Contact: Sayalee
Docs Contact: Masauso Lungu
URL:
Whiteboard:
Depends On:
Blocks: 2126050 2192594
 
Reported: 2022-08-01 07:36 UTC by Michaela Lang
Modified: 2023-05-02 12:43 UTC
CC List: 12 users

Fixed In Version: ceph-17.2.3-35.el9cp
Doc Type: Enhancement
Doc Text:
.Users can now easily set the Prometheus TSDB retention size and time in the Prometheus specification

Previously, users could not modify the default 15d retention period and disk consumption from Prometheus. With this release, users can customize these settings through `cephadm` so that they are persistently applied, thereby making it easier for users to specify how much and for how long they would like their Prometheus instances to retain data. The format for achieving this is as follows:

.Example
----
service_type: prometheus
placement:
  count: 1
spec:
  retention_time: "1y"
  retention_size: "1GB"
----
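A minimal usage sketch, assuming the specification above is saved as `prometheus.yaml` (the file name is illustrative): it can be applied with `ceph orch apply -i`, after which `cephadm` reconciles the Prometheus service with the new retention settings.

.Example
----
# Apply the Prometheus service specification so that cephadm picks up
# the retention settings (the file name is an example).
ceph orch apply -i prometheus.yaml
----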
Clone Of:
: 2192594
Environment:
Last Closed: 2023-03-20 18:57:13 UTC
Embargoed:




Links
 - Red Hat Issue Tracker RHCEPH-4975 (last updated 2022-08-01 07:40:22 UTC)
 - Red Hat Knowledge Base (Solution) 6969242 (last updated 2023-03-14 02:13:49 UTC)
 - Red Hat Product Errata RHBA-2023:1360 (last updated 2023-03-20 18:57:55 UTC)

Description Michaela Lang 2022-08-01 07:36:24 UTC
Description of problem:
The Prometheus daemon deployed by the Red Hat Ceph Storage 5 orchestrator is configured with a 15d retention period that cannot be modified.

We are requesting a way to calculate the expected storage used by Prometheus for variable cluster sizes and retention times (a rough sizing sketch follows the tracker links below).
We are requesting a backport of the feature to modify the default Prometheus retention period, as tracked in:
 - https://tracker.ceph.com/issues/54308
 - https://tracker.ceph.com/issues/56394
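
A rough sizing sketch, based on the Prometheus rule of thumb needed_disk ≈ retention_time_seconds * ingested_samples_per_second * bytes_per_sample (roughly 1-2 bytes per sample after compression). The host name, port 9095, and use of `jq` below are assumptions about a typical cephadm deployment:

----
# Estimate Prometheus disk usage for a given retention period.
# Assumes a cephadm-deployed Prometheus reachable on port 9095 and jq installed.
PROM=http://prometheus-host:9095

# Ingested samples per second, averaged over the last hour.
RATE=$(curl -s "${PROM}/api/v1/query" \
  --data-urlencode 'query=rate(prometheus_tsdb_head_samples_appended_total[1h])' \
  | jq -r '.data.result[0].value[1]')

# needed_disk ~= retention_seconds * samples_per_second * bytes_per_sample (~2 bytes)
RETENTION_DAYS=15
echo "${RETENTION_DAYS} ${RATE}" | \
  awk '{printf "~%.1f GiB for %dd retention\n", $1*86400*$2*2/1024/1024/1024, $1}'
----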

Version-Release number of selected component (if applicable):
5.1+


How reproducible:
every time


Steps to Reproduce:
1. Deploy a Ceph cluster.
2. The default storage.tsdb.retention.time is set to 15d.
3. The default storage.tsdb.retention.time gets reported as 0s unless set manually (see the check below).
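
A minimal check sketch, assuming the cephadm-deployed Prometheus is reachable on port 9095 and `jq` is available; the active retention flag can be read from the Prometheus `/api/v1/status/flags` endpoint:

----
# Read the effective retention setting from the Prometheus runtime flags
# (host and port are assumptions; adjust for your deployment).
curl -s http://prometheus-host:9095/api/v1/status/flags \
  | jq -r '.data."storage.tsdb.retention.time"'
----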

Actual results:
Disk consumption that is hard to calculate.


Expected results:
Modifiable retention period and disk consumption.

Additional info:
A KCS solution has been published to mitigate the issue until a backport is available:
https://access.redhat.com/solutions/6969242

Comment 26 errata-xmlrpc 2023-03-20 18:57:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360

