Description of problem: When we changed the Prometheus and Thanos routes to include a path component, we did not update the URLs exposed via the monitoring-shared-config configmap. Some consumers of these URLs append their own paths when they need them. Since the URLs in monitoring-shared-config now carry a path component, appending another path produces erroneous URLs and 404 errors.
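For illustration, a minimal sketch of the failure mode (the host name is a placeholder; the /api path and the doubled-path URL shape are taken from the QE note below):

# oc -n openshift-config-managed get cm monitoring-shared-config -o jsonpath='{.data.prometheusPublicURL}'
https://prometheus-k8s-openshift-monitoring.apps.example.com/api

A consumer that appends /api/v1/targets to that value ends up requesting https://prometheus-k8s-openshift-monitoring.apps.example.com/api/api/v1/targets, which returns a 404.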
For QE: This only affects consumers of the public URLs that append a path component to them (such as a console development setup); before this fix they would end up with URLs like https://<host>/api/api/v1/targets. For QE testing, please ensure that the expected URLs still work: <prometheusPublicURL>/api/v1/targets, <alertmanagerPublicURL>/api/v2/status and <thanosPublicURL>/api/v1/targets.
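One possible way to exercise these URLs, assuming a service-account token with access to the monitoring APIs is stored in $token (the oc create token invocation and the prometheus-k8s service account are assumptions, not part of the original report; the final command mirrors the verification below):

# token=$(oc -n openshift-monitoring create token prometheus-k8s)
# prometheusPublicURL=$(oc -n openshift-config-managed get cm monitoring-shared-config -o jsonpath='{.data.prometheusPublicURL}')
# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" ${prometheusPublicURL}/api/v1/targets | jq | head

A 200 response with a JSON body (rather than a 404) indicates the URL published in the configmap resolves correctly.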
Verified with 4.11.0-0.nightly-2022-03-20-160505, following the same steps as in Comment 4; no issue for {prometheusPublicURL}/api/v1/targets, {alertmanagerPublicURL}/api/v2/status and {thanosPublicURL}/api/v1/targets.

# oc -n openshift-config-managed get cm monitoring-shared-config -oyaml
apiVersion: v1
data:
  alertmanagerPublicURL: https://alertmanager-main-openshift-monitoring.apps.qe-daily-0322.qe.devcluster.openshift.com
  prometheusPublicURL: https://prometheus-k8s-openshift-monitoring.apps.qe-daily-0322.qe.devcluster.openshift.com
  thanosPublicURL: https://thanos-querier-openshift-monitoring.apps.qe-daily-0322.qe.devcluster.openshift.com
kind: ConfigMap

# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" ${prometheusPublicURL}/api/v1/targets | jq | head
{
  "status": "success",
  "data": {
    "activeTargets": [
      {
        "discoveredLabels": {
          "__address__": "10.129.0.6:8443",
          "__meta_kubernetes_endpoint_address_target_kind": "Pod",
          "__meta_kubernetes_endpoint_address_target_name": "openshift-apiserver-operator-6dd7dbdfd8-4kzpp",
          "__meta_kubernetes_endpoint_node_name": "ip-10-0-53-7",

# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" ${alertmanagerPublicURL}/api/v2/status | jq | head
{
  "cluster": {
    "name": "01FYQD30QXVJ2387GNWYN9KWA7",
    "peers": [
      {
        "address": "10.129.2.15:9094",
        "name": "01FYQD30QXVJ2387GNWYN9KWA7"
      },
      {
        "address": "10.128.2.12:9094",

# oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" ${thanosPublicURL}/api/v1/targets | jq
{
  "status": "success",
  "data": {
    "activeTargets": [
      {
        "discoveredLabels": {
          "__address__": "10.0.140.163:10250",
          "__meta_kubernetes_endpoint_address_target_kind": "Node",
          "__meta_kubernetes_endpoint_address_target_name": "ip-10-0-140-163.ap-southeast-1.compute.internal",
          "__meta_kubernetes_endpoint_port_name": "https-metrics",
          "__meta_kubernetes_endpoint_port_protocol": "TCP",
...
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:5069