Bug 1809204 - Metrics exposed over insecure channel
Summary: Metrics exposed over insecure channel
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 4.4.z
Assignee: Aneesh Puttur
QA Contact: Weibin Liang
Duplicates: 1812508 1821684 (view as bug list)
Depends On: 1817562
Blocks: 1812508
Reported: 2020-03-02 15:08 UTC by Pawel Krupa
Modified: 2020-05-04 11:44 UTC (History)
CC: 5 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1812508 1817562 (view as bug list)
Last Closed: 2020-05-04 11:43:52 UTC
Target Upstream Version:

Attachments (Terms of Use)

System ID Private Priority Status Summary Last Updated
Github openshift cluster-network-operator pull 594 0 None closed [release-4.4] Bug 1809204: Configure tls for multus metrics endpoint 2020-07-28 11:11:15 UTC
Red Hat Product Errata RHBA-2020:0581 0 None None None 2020-05-04 11:44:13 UTC

Description Pawel Krupa 2020-03-02 15:08:13 UTC
Description of problem:
Metrics endpoint for monitor-multus-admission-controller is not using TLS to encrypt traffic.

Version-Release number of selected component (if applicable):
4.4 (possibly also earlier versions)

How reproducible:

Steps to Reproduce:
1. Start a cluster
2. Go to prometheus UI
3. Check connection schema for this component
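The scheme check in step 3 boils down to inspecting the scrape URL Prometheus reports for the target. A minimal sketch of that check (the URLs below are illustrative examples, not output from a real cluster):

```python
# Sketch of the "connection scheme" check from step 3, applied to a scrape URL
# as shown in the Prometheus targets UI. The example URLs are illustrative
# assumptions, not values taken from an actual cluster.
from urllib.parse import urlparse

def is_secure(scrape_url: str) -> bool:
    """Return True if the target is scraped over TLS (https)."""
    return urlparse(scrape_url).scheme == "https"

# Before the fix, the multus metrics target was scraped over plain HTTP:
assert not is_secure("http://10.128.0.7:9091/metrics")
# After the fix, the target should be scraped over HTTPS:
assert is_secure("https://multus-admission-controller.openshift-multus.svc:8443/metrics")
```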

Actual results:
Metrics are exposed over HTTP connection

Expected results:
Metrics are exposed over HTTPS connection

Additional info:
API server operator ServiceMonitor definition can be used as a template on how to fix this issue: https://github.com/openshift/cluster-openshift-apiserver-operator/blob/master/manifests/0000_90_openshift-apiserver-operator_03_servicemonitor.yaml
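The gist of that template is a ServiceMonitor endpoint that scrapes with scheme `https`, a bearer token, and a TLS config trusting the serving CA. A sketch of that endpoint shape, expressed as a Python dict (field names follow the Prometheus Operator ServiceMonitor API; the concrete port, paths, and serverName are illustrative assumptions, not the values from the actual fix):

```python
# Sketch of a ServiceMonitor endpoint that scrapes metrics over TLS, modeled
# on the openshift-apiserver-operator template linked above. Field names are
# from the Prometheus Operator ServiceMonitor API; the concrete values
# (port name, file paths, serverName) are illustrative assumptions.
service_monitor_endpoint = {
    "port": "metrics",
    "scheme": "https",  # scrape over TLS instead of plain HTTP
    "bearerTokenFile": "/var/run/secrets/kubernetes.io/serviceaccount/token",
    "tlsConfig": {
        # Trust the service-serving CA that signs the endpoint's certificate
        "caFile": "/etc/prometheus/configmaps/serving-certs-ca-bundle/service-ca.crt",
        "serverName": "multus-admission-controller.openshift-multus.svc",
    },
}

# The scheme is exactly what this bug is about: https, not http.
assert service_monitor_endpoint["scheme"] == "https"
```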

Comment 1 Ben Bennett 2020-03-04 14:11:00 UTC
This same issue was opened across many components, but at least for the router, the bug was spurious.  Can we validate that we are exposing metrics over TLS and update this bug, please?

Comment 2 Pawel Krupa 2020-03-04 16:57:11 UTC
Yes, it was opened for multiple components, because multiple components have the same issue. To be precise, this one is about openshift-multus/monitor-multus-admission-controller.

Comment 3 Douglas Smith 2020-03-10 14:08:52 UTC
My associate Aneesh Puttur is currently assessing this; I believe he has identified the root cause. We'll target a fix for 4.5 and backport it to 4.4.z.

Comment 6 Pawel Krupa 2020-04-06 11:42:24 UTC
After fixing please remove your component from an exclusion list in e2e tests at https://github.com/openshift/origin/blob/master/test/extended/prometheus/prometheus.go#L253-L268

Comment 7 Ben Bennett 2020-04-08 13:02:24 UTC
*** Bug 1821684 has been marked as a duplicate of this bug. ***

Comment 10 Aneesh Puttur 2020-04-17 17:51:29 UTC
*** Bug 1812508 has been marked as a duplicate of this bug. ***

Comment 13 zhaozhanqi 2020-04-20 10:40:06 UTC
Verified this bug on 4.4.0-0.nightly-2020-04-20-044802

#token=`oc -n openshift-monitoring sa get-token prometheus-k8s`

#oc -n openshift-monitoring exec -c prometheus prometheus-k8s-1  -- curl -k -H "Authorization: Bearer $token" -k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1202  100  1202    0     0   8854      0 --:--:-- --:--:-- --:--:--  8903
# HELP network_attachment_definition_enabled_instance_up Metric to identify clusters with network attachment definition enabled instances.
# TYPE network_attachment_definition_enabled_instance_up gauge
network_attachment_definition_enabled_instance_up{networks="any"} 1
network_attachment_definition_enabled_instance_up{networks="sriov"} 0
# HELP network_attachment_definition_instances Metric to get number of instance using network attachment definition in the cluster.
# TYPE network_attachment_definition_instances gauge
network_attachment_definition_instances{networks="any"} 2
network_attachment_definition_instances{networks="macvlan"} 2
network_attachment_definition_instances{networks="sriov"} 0
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 281
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
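The sample lines in the verification output above can also be checked programmatically. A minimal sketch that parses the simple `name{networks="..."} value` shape used there (sample lines embedded verbatim from the output; the parser itself is not part of the original report):

```python
# Minimal parser for the exposition-format lines shown in the verification
# output above. Handles only the simple name{networks="..."} value shape
# used there; the sample text is copied verbatim from the curl output.
import re

SAMPLE = """\
network_attachment_definition_enabled_instance_up{networks="any"} 1
network_attachment_definition_enabled_instance_up{networks="sriov"} 0
network_attachment_definition_instances{networks="any"} 2
network_attachment_definition_instances{networks="macvlan"} 2
network_attachment_definition_instances{networks="sriov"} 0
"""

LINE = re.compile(r'^(\w+)\{networks="([^"]+)"\} (\S+)$')

def parse(text):
    """Map (metric_name, network) -> float value."""
    metrics = {}
    for line in text.splitlines():
        m = LINE.match(line)
        if m:
            name, network, value = m.groups()
            metrics[(name, network)] = float(value)
    return metrics

metrics = parse(SAMPLE)
# Matches the output above: two macvlan attachment instances, none for sriov.
assert metrics[("network_attachment_definition_instances", "macvlan")] == 2.0
assert metrics[("network_attachment_definition_instances", "sriov")] == 0.0
```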

Comment 15 errata-xmlrpc 2020-05-04 11:43:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

