Bug 1954994 - should update to 2.26.0 for prometheus resources label
Summary: should update to 2.26.0 for prometheus resources label
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Monitoring
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: 4.8.0
Assignee: Jan Fajerski
QA Contact: Junqi Zhao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-04-29 08:42 UTC by Junqi Zhao
Modified: 2021-07-27 23:05 UTC
CC List: 7 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-07-27 23:04:52 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-monitoring-operator pull 1127 0 None open Bug 1946865: Update kube prometheus and related assets 2021-04-29 09:40:12 UTC
Red Hat Product Errata RHSA-2021:2438 0 None None None 2021-07-27 23:05:16 UTC

Description Junqi Zhao 2021-04-29 08:42:41 UTC
Description of problem:
https://bugzilla.redhat.com/show_bug.cgi?id=1931281
The Prometheus version was bumped to 2.26.0 (bug 1931281 above), but the prometheus pods' label is still app.kubernetes.io/version=2.24.0. The label also needs to be updated on the other resources, for example: prometheuses/prometheusrules/servicemonitors/clusterrolebindings/clusterroles.

# token=`oc sa get-token prometheus-k8s -n openshift-monitoring`
# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=prometheus_build_info' | jq
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {
        "metric": {
          "__name__": "prometheus_build_info",
          "branch": "rhaos-4.8-rhel-8",
          "container": "kube-rbac-proxy",
          "endpoint": "metrics",
          "goversion": "go1.16.1",
          "instance": "10.128.2.34:9091",
          "job": "prometheus-user-workload",
          "namespace": "openshift-user-workload-monitoring",
          "pod": "prometheus-user-workload-1",
          "revision": "04af2897149a56fb1b44fca47493740aa888b7e7",
          "service": "prometheus-user-workload",
          "version": "2.26.0"
        },
...
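To pull just the running version out of that response, the jq pipeline above can be extended; a minimal sketch, run here against an abridged copy of the payload:

```shell
# Abridged prometheus_build_info response (taken from the query output above).
response='{"status":"success","data":{"resultType":"vector","result":[{"metric":{"__name__":"prometheus_build_info","version":"2.26.0"}}]}}'

# jq pulls the version field out of the first result's metric labels.
echo "$response" | jq -r '.data.result[0].metric.version'
```

This prints the running version (2.26.0 here), which can then be compared against the pod labels.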


# oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2021-04-29-063720   True        False         73m     Cluster version is 4.8.0-0.nightly-2021-04-29-063720
# oc -n openshift-monitoring get pod --show-labels | grep prometheus-k8s
prometheus-k8s-0                               7/7     Running   1          65m   app.kubernetes.io/component=prometheus,app.kubernetes.io/managed-by=cluster-monitoring-operator,app.kubernetes.io/name=prometheus,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=2.24.0,app=prometheus,controller-revision-hash=prometheus-k8s-588f669d48,operator.prometheus.io/name=k8s,operator.prometheus.io/shard=0,prometheus=k8s,statefulset.kubernetes.io/pod-name=prometheus-k8s-0
prometheus-k8s-1                               7/7     Running   1          71m   app.kubernetes.io/component=prometheus,app.kubernetes.io/managed-by=cluster-monitoring-operator,app.kubernetes.io/name=prometheus,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=2.24.0,app=prometheus,controller-revision-hash=prometheus-k8s-588f669d48,operator.prometheus.io/name=k8s,operator.prometheus.io/shard=0,prometheus=k8s,statefulset.kubernetes.io/pod-name=prometheus-k8s-1
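The stale label is also easy to spot mechanically; a minimal sketch that parses the version out of a --show-labels line (the label string below is abridged from the pod output above):

```shell
# Label string abridged from the prometheus-k8s-0 line above.
labels='app.kubernetes.io/name=prometheus,app.kubernetes.io/version=2.24.0,app=prometheus,prometheus=k8s'

# Split on commas and strip the label key to get the version value.
version=$(printf '%s\n' "$labels" | tr ',' '\n' | sed -n 's#^app\.kubernetes\.io/version=##p')
echo "$version"

# On a live cluster the same label can be shown as a column for each
# resource kind from the description, e.g. (assumes a logged-in oc client):
#   for kind in prometheus prometheusrule servicemonitor clusterrole clusterrolebinding; do
#     oc -n openshift-monitoring get "$kind" -L app.kubernetes.io/version
#   done
```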

Version-Release number of selected component (if applicable):
4.8.0-0.nightly-2021-04-29-063720

How reproducible:
always

Steps to Reproduce:
1. See the description above.

Actual results:
The prometheus pods and related resources still carry the label app.kubernetes.io/version=2.24.0.

Expected results:
The app.kubernetes.io/version label matches the running Prometheus version, 2.26.0.

Additional info:

Comment 3 Junqi Zhao 2021-05-06 03:37:08 UTC
Tested with 4.8.0-0.nightly-2021-05-05-030749: the prometheus resources' label is now 2.26.0, and the same holds for the other resources (prometheuses/prometheusrules/servicemonitors/services/clusterrolebindings/clusterroles).

# oc -n openshift-monitoring get pod --show-labels | grep prometheus-k8s
prometheus-k8s-0                               7/7     Running   1          21h    app.kubernetes.io/component=prometheus,app.kubernetes.io/name=prometheus,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=2.26.0,app=prometheus,controller-revision-hash=prometheus-k8s-94cb99bcc,operator.prometheus.io/name=k8s,operator.prometheus.io/shard=0,prometheus=k8s,statefulset.kubernetes.io/pod-name=prometheus-k8s-0
prometheus-k8s-1                               7/7     Running   1          21h    app.kubernetes.io/component=prometheus,app.kubernetes.io/name=prometheus,app.kubernetes.io/part-of=openshift-monitoring,app.kubernetes.io/version=2.26.0,app=prometheus,controller-revision-hash=prometheus-k8s-94cb99bcc,operator.prometheus.io/name=k8s,operator.prometheus.io/shard=0,prometheus=k8s,statefulset.kubernetes.io/pod-name=prometheus-k8s-1
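The same label parsing can double as a quick pass/fail check; a sketch using label strings abridged from the two pod lines above:

```shell
expected=2.26.0
# Two label strings abridged from the --show-labels output above.
all_labels='app.kubernetes.io/name=prometheus,app.kubernetes.io/version=2.26.0,prometheus=k8s
app.kubernetes.io/name=prometheus,app.kubernetes.io/version=2.26.0,prometheus=k8s'

# Compare each pod's version label against the expected release.
printf '%s\n' "$all_labels" | while IFS= read -r labels; do
  v=$(printf '%s\n' "$labels" | tr ',' '\n' | sed -n 's#^app\.kubernetes\.io/version=##p')
  if [ "$v" = "$expected" ]; then echo "ok: $v"; else echo "MISMATCH: $v"; fi
done
```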

Comment 6 errata-xmlrpc 2021-07-27 23:04:52 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438

