The controller-manager pod does not have the prometheus scrape annotation:

$ oc edit pod controller-manager-vzfjb -n kube-service-catalog

apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: restricted
  creationTimestamp: 2018-02-26T07:18:08Z
  generateName: controller-manager-

I don't believe adding the annotation to the daemonset propagates it down to the pod level. I believe that if you add the scrape annotation to the controller-manager pod, Prometheus will start scraping it. I'm unable to connect to Prometheus to verify, though - I used https://prometheus-openshift-metrics.apps.0226-g87.qe.rhcloud.com/ but I'm unable to authenticate successfully (what userid/password do you use?).

Note that the configuration will change (we won't use the annotation any more) once https://github.com/openshift/origin/pull/18694 merges.
(In reply to Jay Boyd from comment #1)
> The controller-manager pod does not have the prometheus scrape annotation:
>
> $ oc edit pod controller-manager-vzfjb -n kube-service-catalog
>
> apiVersion: v1
> kind: Pod
> metadata:
>   annotations:
>     openshift.io/scc: restricted
>   creationTimestamp: 2018-02-26T07:18:08Z
>   generateName: controller-manager-
>
> I don't believe adding the annotation to the daemonset propagates the
> attribute down to the pod level.

That is my mistake; the daemonset should be changed to:

spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: controller-manager
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"

And the pod will then be deployed with:

- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      openshift.io/scc: restricted
      prometheus.io/scrape: "true"

> I believe that if you add the scrape annotation to the controller manager
> pod Prometheus will start scraping it. I'm unable to connect to Prometheus
> to verify though - I used
> https://prometheus-openshift-metrics.apps.0226-g87.qe.rhcloud.com/ but I'm
> unable to successfully authenticate (what userid/password do you use?).

I use chezhang/redhat to log in to the Prometheus console (you can also use another user, but it needs the cluster-admin role). I double-checked today, but I still cannot get the related metrics in the Prometheus console even though I added prometheus.io/scrape: "true" to the controller-manager pod.

> Note that the configuration will change (we won't use the annotation any
> more) once https://github.com/openshift/origin/pull/18694 merges.

I will confirm again after the PR merges.
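The propagation point above can be sanity-checked offline. A minimal Python sketch (the manifest dict is a trimmed stand-in for what `oc get daemonset -o json` would return from this cluster, not actual output): annotations on the DaemonSet's own metadata stay on the DaemonSet object, while only annotations under spec.template.metadata are copied onto the pods it creates.

```python
# Sketch: annotations on the DaemonSet object itself do not reach pods;
# only annotations under spec.template.metadata are stamped onto new pods.
daemonset = {
    "metadata": {
        # Wrong place - this stays on the DaemonSet object only.
        "annotations": {"prometheus.io/scrape": "true"}
    },
    "spec": {
        "template": {
            "metadata": {
                # Nothing here, so pods get no scrape annotation.
                "annotations": {}
            }
        }
    },
}

def pod_annotations_from(ds):
    """Annotations a pod created from this DaemonSet would carry."""
    return ds["spec"]["template"]["metadata"].get("annotations", {})

print(pod_annotations_from(daemonset))  # {} - scrape annotation missing

# The fix from the comment above: move the annotation into the pod template.
daemonset["spec"]["template"]["metadata"]["annotations"] = {
    "prometheus.io/scrape": "true"
}
print(pod_annotations_from(daemonset))  # annotation now present on new pods
```

This mirrors the YAML change above: the edit has to land under spec.template.metadata.annotations, not under the top-level metadata.annotations.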
I tried to review your environment this morning, but it looks like it was reset. Do you want to debug this or put it on hold until we have the new configuration?
@Jay Thanks for your quick response.
@Jay I noticed PR https://github.com/openshift/origin/pull/18694 targets the openshift:master branch (I think the master branch is for 3.10 at present), but there is no PR for the release-3.9 branch. Mar 7 is the code freeze date; will this bug be fixed in OCP 3.9?
This is the PR for 3.9: https://github.com/openshift/origin/pull/18815. I'm hoping to get it in, but the merge queue is really slow.
This issue was pulled from 3.9 at the last minute. A review is needed, as the fix exposes metrics over non-authenticated HTTP.
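For context on why the annotation matters and why plain HTTP is the concern here: annotation-driven scraping is implemented in the Prometheus configuration via relabeling, and the scrape scheme defaults to http unless overridden. A sketch of such a scrape job (standard prometheus.yml syntax; the job name and namespace below are illustrative assumptions, not this cluster's actual config):

```yaml
# Sketch of an annotation-driven scrape job; Prometheus defaults to
# scheme: http, which is the unauthenticated exposure noted above.
scrape_configs:
  - job_name: service-catalog-controller-manager
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
            - kube-service-catalog
    relabel_configs:
      # Keep only pods carrying the prometheus.io/scrape: "true" annotation.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```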
Changing version to 3.9.0 since the issue was found in 3.9.0 testing.
Jay, what is the status of this bug? Do you still want to provide the fix in 3.9.z, or do you need to change the "target release" to 3.10?
Yes, target is 3.10. Ansible Installer: https://github.com/openshift/openshift-ansible/pull/7681 is merged. Cluster Up: https://github.com/openshift/origin/pull/19286
finally merged.
Changing status to ON_QA since the image is ready for test.
Verified and passed with:

# service-catalog --version
v3.10.0-0.27.0;Upstream:v0.1.13

We can now get service-catalog metrics both in the Prometheus console and on the backend.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:1816