Description of problem:
The metrics endpoint for monitor-multus-admission-controller is not using TLS to encrypt traffic.

Version-Release number of selected component (if applicable):
4.4 (possibly also earlier versions)

How reproducible:
Always

Steps to Reproduce:
1. Start a cluster
2. Go to the Prometheus UI
3. Check the connection scheme for this component

Actual results:
Metrics are exposed over an HTTP connection

Expected results:
Metrics are exposed over an HTTPS connection

Additional info:
The API server operator's ServiceMonitor definition can be used as a template for how to fix this issue: https://github.com/openshift/cluster-openshift-apiserver-operator/blob/master/manifests/0000_90_openshift-apiserver-operator_03_servicemonitor.yaml
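As a rough sketch of the direction the referenced template points in: the ServiceMonitor endpoint is switched from plain HTTP to HTTPS with a tlsConfig pointing at the service-CA bundle. Field names follow the Prometheus Operator ServiceMonitor API; the namespace, object name, selector labels, port name, and serverName below are illustrative assumptions, not the actual multus manifest.

```yaml
# Illustrative sketch only, modeled on the linked apiserver-operator template.
# Names, labels, and paths are assumptions for this component.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: monitor-multus-admission-controller
  namespace: openshift-multus
spec:
  endpoints:
  - port: metrics
    scheme: https                      # scrape over TLS instead of plain HTTP
    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    tlsConfig:
      # CA bundle injected by the service-ca operator (path is an assumption)
      caFile: /etc/prometheus/configmaps/serving-certs-ca-bundle/service-ca.crt
      serverName: multus-admission-controller.openshift-multus.svc
  selector:
    matchLabels:
      app: multus-admission-controller
```

The key points are `scheme: https` and a `tlsConfig` whose `serverName` matches the certificate issued for the component's Service by the service-ca operator.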
This same issue was opened across many components, but at least for the router the bug was spurious. Can we validate whether this component actually exposes metrics over TLS and update this bug, please?
Yes, it was opened for multiple components because multiple components have the same issue. To be precise, this one is about openshift-multus/monitor-multus-admission-controller.
My associate Aneesh Puttur is currently assessing this. I believe he has identified the root cause; we'll target getting a fix into 4.5 and backport it to 4.4.z.
After fixing, please remove your component from the exclusion list in the e2e tests at https://github.com/openshift/origin/blob/master/test/extended/prometheus/prometheus.go#L253-L268
*** Bug 1821684 has been marked as a duplicate of this bug. ***
*** Bug 1812508 has been marked as a duplicate of this bug. ***
Verified this bug on 4.4.0-0.nightly-2020-04-20-044802:

$ token=`oc -n openshift-monitoring sa get-token prometheus-k8s`
$ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-1 -- curl -k -H "Authorization: Bearer $token" https://10.128.0.5:8443/metrics
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1202  100  1202    0     0   8854      0 --:--:-- --:--:-- --:--:--  8903
# HELP network_attachment_definition_enabled_instance_up Metric to identify clusters with network attachment definition enabled instances.
# TYPE network_attachment_definition_enabled_instance_up gauge
network_attachment_definition_enabled_instance_up{networks="any"} 1
network_attachment_definition_enabled_instance_up{networks="sriov"} 0
# HELP network_attachment_definition_instances Metric to get number of instance using network attachment definition in the cluster.
# TYPE network_attachment_definition_instances gauge
network_attachment_definition_instances{networks="any"} 2
network_attachment_definition_instances{networks="macvlan"} 2
network_attachment_definition_instances{networks="sriov"} 0
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 281
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0581