Bug 1852767

Summary: prometheus.retention does not take effect for UWM prometheus-user-workload pod
Product: OpenShift Container Platform
Reporter: Junqi Zhao <juzhao>
Component: Monitoring
Assignee: Lili Cosic <lcosic>
Status: CLOSED ERRATA
QA Contact: Junqi Zhao <juzhao>
Severity: medium
Priority: medium
Version: 4.6
CC: alegrand, anpicker, erooth, kakkoyun, lcosic, mloibl, pkrupa, surbania
Target Milestone: ---
Keywords: UpcomingSprint
Target Release: 4.6.0
Hardware: Unspecified
OS: Unspecified
Last Closed: 2020-10-27 16:11:46 UTC
Type: Bug
Attachments:
prometheus crd file (no flags)

Description Junqi Zhao 2020-07-01 09:10:23 UTC
Description of problem:
Enabled User Workload Monitoring and created the user-workload-monitoring-config ConfigMap to set prometheus.retention to 48h, but the retention time is still 15d for the prometheus-user-workload pods.
Note: the default retention time for the prometheus-k8s StatefulSet under openshift-monitoring is 15d.
# kubectl -n openshift-user-workload-monitoring get cm user-workload-monitoring-config -oyaml
apiVersion: v1
data:
  config.yaml: |
    prometheus:
      retention: 48h
kind: ConfigMap
metadata:
  creationTimestamp: "2020-07-01T08:26:31Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:config.yaml: {}
    manager: oc
    operation: Update
    time: "2020-07-01T08:26:31Z"
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
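
For reference, User Workload Monitoring had been enabled beforehand; a minimal sketch of that step, assuming the 4.6 enableUserWorkload field of the cluster-monitoring-config ConfigMap (the exact enabling command is not part of this report):
# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    # assumption: 4.6 field name that enables the user workload monitoring stack
    enableUserWorkload: true
EOF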

# for i in $(kubectl -n openshift-user-workload-monitoring get sts --no-headers | awk '{print $1}'); do echo $i; kubectl -n openshift-user-workload-monitoring get sts $i -oyaml | grep -i retention -A1; done
prometheus-user-workload
        - --storage.tsdb.retention.time=15d
        - --web.enable-lifecycle
thanos-ruler-user-workload
        - --tsdb.retention=24h
        - --label=thanos_ruler_replica="$(POD_NAME)"

#  for i in $(kubectl -n openshift-user-workload-monitoring get pod | grep prometheus-user-workload | awk '{print $1}'); do echo $i; kubectl -n openshift-user-workload-monitoring get pod  $i -oyaml | grep -i retention -A1; done
prometheus-user-workload-0
    - --storage.tsdb.retention.time=15d
    - --web.enable-lifecycle
prometheus-user-workload-1
    - --storage.tsdb.retention.time=15d
    - --web.enable-lifecycle

# kubectl -n openshift-monitoring get sts prometheus-k8s  -oyaml | grep -i retention -A1
        - --storage.tsdb.retention.time=15d
        - --web.enable-lifecycle
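
For comparison, the 15d default shown above for prometheus-k8s is itself configurable; a minimal sketch of the corresponding key in the cluster-monitoring-config ConfigMap, assuming the prometheusK8s field name (unrelated to this bug):
    config.yaml: |
      # assumption: key that controls retention for the openshift-monitoring Prometheus
      prometheusK8s:
        retention: 48h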



Version-Release number of selected component (if applicable):
4.6.0-0.nightly-2020-06-30-000342

How reproducible:
always

Steps to Reproduce:
1. Enable User Workload Monitoring and set prometheus.retention to 48h in the user-workload-monitoring-config ConfigMap, as shown in the description.

Actual results:
The prometheus-user-workload pods still run with --storage.tsdb.retention.time=15d.

Expected results:
The prometheus-user-workload pods run with --storage.tsdb.retention.time=48h.

Additional info:

Comment 6 Junqi Zhao 2020-07-01 11:55:25 UTC
Created attachment 1699477 [details]
prometheus crd file

Comment 8 Junqi Zhao 2020-07-01 12:07:32 UTC
# oc -n openshift-user-workload-monitoring get prometheus user-workload -oyaml | grep retention
  retention: 15d
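
If needed, a hedged sketch for checking whether cluster-monitoring-operator logged any problem while reconciling the user workload config (the container name and the grep pattern are assumptions, not taken from this report):
# oc -n openshift-monitoring logs deployment/cluster-monitoring-operator -c cluster-monitoring-operator | grep -i user-workload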

Comment 10 Lili Cosic 2020-07-30 14:46:55 UTC
PR merged, forgot to link it: https://github.com/openshift/cluster-monitoring-operator/pull/839

Comment 12 Junqi Zhao 2020-08-03 08:49:39 UTC
Tested with 4.6.0-0.nightly-2020-08-02-091622; the issue is fixed. Verification steps are the same as in Comment 0.
# oc -n openshift-user-workload-monitoring get prometheus/user-workload -oyaml | grep -i retention -A1
  retention: 48h
  ruleNamespaceSelector: {}
# for i in $(oc -n openshift-user-workload-monitoring get pod | grep prometheus-user-workload | awk '{print $1}'); do echo $i; oc -n openshift-user-workload-monitoring get pod  $i -oyaml | grep -i retention -A1; done
prometheus-user-workload-0
    - --storage.tsdb.retention.time=48h
    - --web.enable-lifecycle
prometheus-user-workload-1
    - --storage.tsdb.retention.time=48h
    - --web.enable-lifecycle
# oc -n openshift-user-workload-monitoring get sts/prometheus-user-workload -oyaml | grep -i retention -A1
        - --storage.tsdb.retention.time=48h
        - --web.enable-lifecycle
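
As an extra check, the effective value can also be read from the running Prometheus via its flags API; a minimal sketch, assuming the pod serves its web API on port 9090 and that port-forwarding to it is allowed:
# oc -n openshift-user-workload-monitoring port-forward pod/prometheus-user-workload-0 9090:9090 &
# curl -s http://localhost:9090/api/v1/status/flags | grep -o '"storage.tsdb.retention.time":"[^"]*"'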

Comment 15 errata-xmlrpc 2020-10-27 16:11:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196