Description of problem:
We can configure the retention time for samples in PrometheusK8sConfig on OCP 3.11 by following the documented guide.
However, the upgrade process removes the value because it is not included in the cluster-monitoring-operator-config template.
This behavior causes old samples to be removed unexpectedly from Prometheus, and if the operator only allows the default of 15 days, that is not reasonable on many modern systems.
Most production systems need to increase the sample retention time, because 15 days is too short.
From the documentation: use PrometheusK8sConfig to customize the Prometheus instance used for cluster monitoring, including the retention time for samples.
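A minimal sketch of the configuration in question, assuming the field names documented for OCP 3.11 cluster monitoring (adjust the retention value to your needs):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      # retention time for samples; the operator default is 15d
      retention: 25d
```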
Version-Release number of selected component (if applicable):
This issue was reported when OCP 3.11 was upgraded from v3.11.135 to v3.11.157.
It is always reproducible as follows.
Steps to Reproduce:
1. Configure 'retention: "25d"' in the cluster-monitoring-config configmap.
2. Run the upgrade playbooks or reinstall cluster-monitoring-operator.
3. The "retention" setting is removed completely.
Actual results:
The configured "retention" is removed, resulting in unexpected removal of old samples.

Expected results:
After the upgrade, the configured "retention" remains as it was.

Additional info:
Configuring "retention" is an already-implemented feature, so the upgrade should preserve this configuration.
The openshift_cluster_monitoring_operator_prometheus_retention parameter has been added.
# rpm -qa | grep ansible
Set a value for openshift_cluster_monitoring_operator_prometheus_retention and it takes effect; example:
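For example, in the Ansible inventory (variable name as noted above; the value uses Prometheus duration syntax):

```ini
[OSEv3:vars]
# keep 25 days of samples instead of the 15d default
openshift_cluster_monitoring_operator_prometheus_retention=25d
```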
# oc -n openshift-monitoring get pod prometheus-k8s-0 -oyaml | grep -i "storage.tsdb.retention"
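With retention configured, the grep above should match a container argument along these lines (flag name per Prometheus 2.x as shipped with OCP 3.11; verify against your Prometheus version):

```yaml
- --storage.tsdb.retention=25d
```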
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
I would like to re-open this BZ as I have an issue which I believe to be related.
Please advise if I need to create a new BZ.
This pertains to the retention of the following Prometheus customisations:
infrarole: prometheus <<== there was a requirement to schedule Prometheus to a dedicated node post-installation.
These changes were attempted first against statefulset.apps/prometheus-k8s and then against cm/cluster-monitoring-config when it became apparent that the changes/customisations were lost.
The customer upgraded from 3.11.286 to 3.11.380.
I note the creation of the variable openshift_cluster_monitoring_operator_prometheus_retention, but can you advise how best (if possible) to ensure that the other customisations are retained?
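One sketch of the node-placement customisation above: rather than editing the statefulset (which the operator reconciles away), the prometheusK8s section of cm/cluster-monitoring-config can carry a nodeSelector alongside retention. This assumes the OCP 3.11 monitoring config schema, and the node label shown is hypothetical; use whatever label marks your dedicated node:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 25d
      # hypothetical label; substitute the label on your dedicated node
      nodeSelector:
        node-role.kubernetes.io/infra: "true"
```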