Bug 2112800
| Summary: | [RFE] Red Hat Ceph Storage 5, Prometheus daemon retention time not adjustable | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Michaela Lang <milang> |
| Component: | Cephadm | Assignee: | Adam King <adking> |
| Status: | CLOSED ERRATA | QA Contact: | Sayalee <saraut> |
| Severity: | medium | Docs Contact: | Masauso Lungu <mlungu> |
| Priority: | unspecified | | |
| Version: | 5.2 | CC: | adking, cephqe-warriors, gjose, lithomas, milverma, mlungu, mmuench, prprakas, saraut, sostapov, vdas, vereddy |
| Target Milestone: | --- | | |
| Target Release: | 6.0 | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-17.2.3-35.el9cp | Doc Type: | Enhancement |
| Story Points: | --- | | |
| Clone Of: | | | |
| Clones: | 2192594 (view as bug list) | Environment: | |
| Last Closed: | 2023-03-20 18:57:13 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2126050, 2192594 | | |

Doc Text:

.Users can now easily set the Prometheus TSDB retention size and time in the Prometheus specification
Previously, users could not modify the default 15d retention period or the amount of disk space consumed by Prometheus.
With this release, users can customize these settings through `cephadm` so that they are applied persistently, making it easy to specify how much data their Prometheus instances retain and for how long.
The format for achieving this is as follows:
.Example
----
service_type: prometheus
placement:
  count: 1
spec:
  retention_time: "1y"
  retention_size: "1GB"
----
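The specification in the Doc Text above is applied like any other cephadm service specification. The following is a minimal usage sketch, assuming a cephadm-managed cluster with the orchestrator module enabled; the filename `prometheus.yaml` is illustrative and not part of the original report.

.Usage sketch
----
# Save the specification shown above to a file (the filename is illustrative)
cat > prometheus.yaml << 'EOF'
service_type: prometheus
placement:
  count: 1
spec:
  retention_time: "1y"
  retention_size: "1GB"
EOF

# Apply the spec; cephadm reconfigures the Prometheus daemon accordingly
ceph orch apply -i prometheus.yaml

# Inspect the persisted spec, including the retention settings
ceph orch ls prometheus --export
----

If `retention_time` and `retention_size` are omitted from the spec, Prometheus falls back to its own defaults (a 15d retention period and no size-based limit), which is the behaviour this enhancement lets users override.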
Description
Michaela Lang, 2022-08-01 07:36:24 UTC

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update) and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360