Description of problem:
Metrics endpoint is not using TLS to encrypt traffic.

Version-Release number of selected component (if applicable):
4.4 (possibly also earlier versions)

How reproducible:
Always

Steps to Reproduce:
1. Start a cluster
2. Go to the Prometheus UI
3. Check the connection scheme for this component

Actual results:
Metrics are exposed over an HTTP connection

Expected results:
Metrics are exposed over an HTTPS connection

Additional info:
The API server operator's ServiceMonitor definition can be used as a template for how to fix this issue:
https://github.com/openshift/cluster-openshift-apiserver-operator/blob/master/manifests/0000_90_openshift-apiserver-operator_03_servicemonitor.yaml
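For context, the relevant part of the linked template is the endpoint's `scheme` and `tlsConfig` stanza in the ServiceMonitor. Below is a minimal sketch of what an HTTPS-scraping ServiceMonitor for this component could look like; the metadata names, label selector, and `serverName` value are illustrative assumptions, not the actual manifest:

```yaml
# Hypothetical sketch, modeled on the apiserver-operator ServiceMonitor linked
# above. Names, labels, and paths are assumptions for illustration.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: dns-default
  namespace: openshift-dns
spec:
  endpoints:
  - port: metrics
    scheme: https            # scrape over TLS instead of plain HTTP
    tlsConfig:
      # CA bundle that Prometheus uses to verify the serving certificate
      caFile: /etc/prometheus/configmaps/serving-certs-ca-bundle/service-ca.crt
      # must match a SAN in the service's serving certificate
      serverName: dns-default.openshift-dns.svc
  selector:
    matchLabels:
      dns.operator.openshift.io/owning-dns: default   # assumed label
```

Without `scheme: https`, the Prometheus Operator defaults the endpoint to plain HTTP, which matches the behavior reported here.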
https://github.com/openshift/cluster-dns-operator/blob/master/manifests/0000_90_dns-operator_02_servicemonitor.yaml

Was this bug generated by some boilerplate process? It refers to "this component", and the reproducer steps seem entirely non-specific to the DNS operator.
Sorry for not clarifying. This is about openshift-dns/dns-default component.
Yes, so am I — what exactly leads you to believe that metrics are served insecurely? The CoreDNS pods expose TCP port 9153 serving a TLS endpoint secured by a certificate from the service serving cert signer, which Prometheus is configured to use.
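The serving-cert mechanism mentioned here works by annotating the Service; the service-CA operator then generates a serving certificate into the named secret, which the pod mounts for its TLS metrics port. A sketch, with all names and port numbers illustrative rather than taken from the actual manifests:

```yaml
# Hypothetical sketch of the service serving cert annotation. The secret name,
# selector, and port are assumptions for illustration.
apiVersion: v1
kind: Service
metadata:
  name: dns-default
  namespace: openshift-dns
  annotations:
    # asks the service-CA operator to generate a serving cert/key pair
    # into the secret named here
    service.beta.openshift.io/serving-cert-secret-name: dns-default-metrics-tls
spec:
  selector:
    dns.operator.openshift.io/daemonset-dns: default   # assumed label
  ports:
  - name: metrics
    port: 9153
    targetPort: 9153
```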
Created attachment 1667246 [details]
prometheus scrape targets - dns section

Based on the Prometheus scrape targets page, all DNS endpoints are scraped over HTTP rather than HTTPS, which is an insecure channel. A screenshot from a 4.3 cluster is attached to this BZ.
Thanks, I see my confusion now — I was looking at the DNS operator and not CoreDNS itself, which indeed looks misconfigured despite the TLS config present throughout the relevant resources. Moving to 4.5 for now unless someone can justify blocker status (given this has probably been an issue since 4.1).
After fixing, please remove your component from the exclusion list in the e2e tests at https://github.com/openshift/origin/blob/master/test/extended/prometheus/prometheus.go#L253-L268
Verified with 4.5.0-0.nightly-2020-04-25-170442; the issue has been fixed.

$ oc -n openshift-dns get pod -owide
NAME                READY   STATUS    RESTARTS   AGE    IP           NODE                          NOMINATED NODE   READINESS GATES
dns-default-wp7p6   3/3     Running   0          111m   10.128.0.4   hongli-pl442-mld8x-master-1   <none>           <none>
<---snip--->

Go to the Prometheus UI and check the targets; they now show, for example:
https://10.128.0.4:9154/metrics
> After fixing please remove your component from an exclusion list in e2e tests For the record, that was done with this PR: https://github.com/openshift/origin/pull/24904
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2409