Bug 2065076

Summary: Access monitoring Routes based on monitoring-shared-config creates wrong URL
Product: OpenShift Container Platform
Component: Monitoring
Version: 4.10
Target Release: 4.11.0
Reporter: Jan Fajerski <jfajersk>
Assignee: Jan Fajerski <jfajersk>
QA Contact: Junqi Zhao <juzhao>
CC: amuller, anpicker, aos-bugs, juzhao, juzhou, rkshirsa
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Hardware: Unspecified
OS: Unspecified
Doc Type: No Doc Update
Last Closed: 2022-08-10 10:54:38 UTC
Type: Bug

Description Jan Fajerski 2022-03-17 10:08:48 UTC
Description of problem:
When we changed the Prometheus and Thanos routes to include a path component, we did not adjust the URLs exposed via the monitoring-shared-config configmap accordingly. Some consumers of these URLs append API paths when they need them. Since the URLs in monitoring-shared-config now carry the route's path component, appending another path produces erroneous URLs and 404 errors.
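A minimal sketch of the failure mode, using a placeholder host; "/api" stands in for the route's path component, matching the doubled URL shown in Comment 3:

```shell
# Hypothetical hosts for illustration only.
broken_base="https://prometheus-k8s.apps.example.com/api"  # URL as published before the fix (carries the route path)
fixed_base="https://prometheus-k8s.apps.example.com"       # URL as published after the fix

# A consumer that appends the API path naively:
echo "${broken_base}/api/v1/targets"  # doubled path -> https://prometheus-k8s.apps.example.com/api/api/v1/targets (404)
echo "${fixed_base}/api/v1/targets"   # correct endpoint
```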

Comment 3 Jan Fajerski 2022-03-22 11:47:48 UTC
For QE:
This only affects consumers of the public URLs that append a path component to these URLs (such as a console development setup). Before this fix they would end up with https://<host>/api/api/v1/targets.
For QE testing, please ensure that the expected URLs still work: <prometheusPublicURL>/api/v1/targets, <alertmanagerPublicURL>/api/v2/status and <thanosPublicURL>/api/v1/targets.
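A sketch of the URL expectations above (hosts are placeholders; in a real check the three values would come from the monitoring-shared-config configmap and each URL would be curled with a bearer token, as in Comment 5). The point is that each API path appears exactly once:

```shell
# Hypothetical public URLs as published after the fix: no path component.
prometheusPublicURL="https://prometheus-k8s-openshift-monitoring.apps.example.com"
alertmanagerPublicURL="https://alertmanager-main-openshift-monitoring.apps.example.com"
thanosPublicURL="https://thanos-querier-openshift-monitoring.apps.example.com"

for url in \
  "${prometheusPublicURL}/api/v1/targets" \
  "${alertmanagerPublicURL}/api/v2/status" \
  "${thanosPublicURL}/api/v1/targets"
do
  # fail on the pre-fix symptom: a doubled path segment
  case "$url" in
    */api/api/*) echo "doubled path: $url"; exit 1 ;;
    *)           echo "ok: $url" ;;
  esac
done
```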

Comment 5 Junqi Zhao 2022-03-22 12:16:54 UTC
verified with 4.11.0-0.nightly-2022-03-20-160505, followed the same steps in Comment 4, no issue for {prometheusPublicURL}/api/v1/targets, {alertmanagerPublicURL}/api/v2/status and {thanosPublicURL}/api/v1/targets
# oc -n openshift-config-managed get cm monitoring-shared-config -oyaml
apiVersion: v1
data:
  alertmanagerPublicURL: https://alertmanager-main-openshift-monitoring.apps.qe-daily-0322.qe.devcluster.openshift.com
  prometheusPublicURL: https://prometheus-k8s-openshift-monitoring.apps.qe-daily-0322.qe.devcluster.openshift.com
  thanosPublicURL: https://thanos-querier-openshift-monitoring.apps.qe-daily-0322.qe.devcluster.openshift.com
kind: ConfigMap

# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" ${prometheusPublicURL}/api/v1/targets | jq | head
  "status": "success",
  "data": {
    "activeTargets": [
        "discoveredLabels": {
          "__address__": "",
          "__meta_kubernetes_endpoint_address_target_kind": "Pod",
          "__meta_kubernetes_endpoint_address_target_name": "openshift-apiserver-operator-6dd7dbdfd8-4kzpp",
          "__meta_kubernetes_endpoint_node_name": "ip-10-0-53-7",

# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" ${alertmanagerPublicURL}/api/v2/status | jq | head
  "cluster": {
    "name": "01FYQD30QXVJ2387GNWYN9KWA7",
    "peers": [
        "address": "",
        "name": "01FYQD30QXVJ2387GNWYN9KWA7"
        "address": "",

# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" ${thanosPublicURL}/api/v1/targets | jq
  "status": "success",
  "data": {
    "activeTargets": [
        "discoveredLabels": {
          "__address__": "",
          "__meta_kubernetes_endpoint_address_target_kind": "Node",
          "__meta_kubernetes_endpoint_address_target_name": "ip-10-0-140-163.ap-southeast-1.compute.internal",
          "__meta_kubernetes_endpoint_port_name": "https-metrics",
          "__meta_kubernetes_endpoint_port_protocol": "TCP",

Comment 11 errata-xmlrpc 2022-08-10 10:54:38 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.