*** Bug 1671193 has been marked as a duplicate of this bug. ***
Ryan, I tried this with a kubectl built from 1.12.4 and it worked. Something must be up with the wiring in oc.
I will look into it today.
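For reference, a minimal way to run that comparison against the same cluster (the exact invocation used above is an assumption on my part):

```
# Same kubeconfig in effect for both clients.
kubectl top nodes     # reported working with a kubectl built from 1.12.4
oc adm top node       # fails in the affected oc build
```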
metrics-server does not appear to be running within the cluster at `metrics.k8s.io`.

```
kubectl get apiservices | grep metric
```
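As a sketch, two quick follow-up checks (assuming the standard v1beta1 metrics API group):

```
# Is the metrics APIService registered and reported Available?
kubectl get apiservices v1beta1.metrics.k8s.io
# Does the backing service actually answer on the aggregated API path?
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
```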
Bug fix PR: https://github.com/openshift/origin/pull/21927
*** Bug 1670270 has been marked as a duplicate of this bug. ***
(In reply to Ryan Phillips from comment #6)
> metrics-server does not appear to be running within the cluster at
> `metrics.k8s.io`.
> 
> ```
> kubectl get apiservices | grep metric
> ```

prometheus-adapter has been replaced by metrics-server

$ oc get apiservices v1beta1.metrics.k8s.io -oyaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  annotations:
    service.alpha.openshift.io/inject-cabundle: "true"
  creationTimestamp: 2019-02-01T04:53:01Z
  name: v1beta1.metrics.k8s.io
  resourceVersion: "14951"
  selfLink: /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.metrics.k8s.io
  uid: 40e8b24f-25dd-11e9-84ca-0a7e0f4b1e0a
spec:
  caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURPRENDQWlDZ0F3SUJBZ0lJYmRXRXdzb3VOREF3RFFZSktvWklodmNOQVFFTEJRQXdKakVTTUJBR0ExVUUKQ3hNSmIzQmxibk5vYVdaME1SQXdEZ1lEVlFRREV3ZHliMjkwTFdOaE1CNFhEVEU1TURJd01UQTBNamN4T1ZvWApEVEk1TURFeU9UQTBNamN5TVZvd0xURVJNQThHQTFVRUN4TUlZbTl2ZEd0MVltVXhHREFXQmdOVkJBTVREM05sCmNuWnBZMlV0YzJWeWRtbHVaekNDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFLY2MKa0Q5UEtNb2tOc1NzelhSaVc2L1lrTjlaUXJCRTB3UHlzeFNjUHpwc2FEaUZWeWhsZmU4bHhhODRCWFNjcTB4YQpvZ3FOR1lZQ3l1SVpDZFdSRkdLb2pEN01FUkFLMWR3U2FRU1FzUDAzUHRqM0RsNXVUZHI5KzJYN3dQNzhkTTFuCklXU0d0anNPV1lQSTNReWV2TzdFbEd6NmcvYm1OMzV6ZG9uaGFTS0NwcXdXWnprY0ljWi8zYTVJQ0ltVTUyQ1YKbS9ydEw5cGIxdlhyZ0dnMGJIK1BjMFVOdFYxblRycUV1R0JwV1FnM1pjU2xEL0Y5Q0pISVY2empNUC9IZFowSwpOZHhJQVhGWVNwVS9iSFo1TUpZVGxJYWFHam9VdW9OSTVBMlFTb0JxUGFUSWh3Uk02dHVDaFNUZVdMbzZVTkh6ClR6UVd6b24rbUpXR2pVTXVTTmtDQXdFQUFhTmpNR0V3RGdZRFZSMFBBUUgvQkFRREFnS2tNQThHQTFVZEV3RUIKL3dRRk1BTUJBZjh3SFFZRFZSME9CQllFRkttK0t5L1lSL01sTThrdEVRa3hUd3BxRkk1L01COEdBMVVkSXdRWQpNQmFBRkttK0t5L1lSL01sTThrdEVRa3hUd3BxRkk1L01BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQWh5THBiCkdmckVrQllOY0RVUVI3bW9keXNVK1dBMkZnblJFK3JWbXZ2SkVpcjducnk2Rm1yd1RRVnVSRUJqYmJQWjQvbmcKTkI0OHczdmdzMlEzbExOTnFLd1RiMUFVTk9rOU1OQ1hZVGJpNm4xZUdEaGhHS01WYk1NcERmbTJINkIyL1hyRwpVMHFURzQ4Nmh3Qm0rWXVxczlQTWRiS0IwM1JSSkRvRXdKVE5hZkYvNm1nOXBWM2Q5YnJwdXM0WkUzOFg3L2toCld4VTVvSTVxRlVZR2ZmalQzVEJOdHZIZ0dpTXFNb1puMlpVZU93bmRmZkhENmYvYjFDMTFLNmdsM3gzbEp2eGoKZzJmcWVZOE5qSmp1UmVleiszTlM2WGxlbEJ3N3UwKzJLQW4rcjgyOXN1S014WVdQVEpoNzBmeFljVEk2WW5ucwpTRVBOMmZmVFRCcEswNXliCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  service:
    name: prometheus-adapter
    namespace: openshift-monitoring
  version: v1beta1
  versionPriority: 100
status:
  conditions:
  - lastTransitionTime: 2019-02-01T04:53:01Z
    message: all checks passed
    reason: Passed
    status: "True"
    type: Available
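For completeness, a couple of hedged follow-up checks against the backing service named in the spec above (the grep-based pod lookup is just one way to find the adapter pods):

```
# The APIService points at service "prometheus-adapter" in openshift-monitoring.
oc -n openshift-monitoring get svc prometheus-adapter
oc -n openshift-monitoring get pods | grep prometheus-adapter
```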
(In reply to Junqi Zhao from comment #9)
> (In reply to Ryan Phillips from comment #6)
> > metrics-server does not appear to be running within the cluster at
> > `metrics.k8s.io`.
> > 
> > ```
> > kubectl get apiservices | grep metric
> > ```
> 
> prometheus-adapter has been replaced by metrics-server

Sorry, it should be: metrics-server has been replaced by prometheus-adapter.
PR merged and fixed.
# oc adm top node
error: metrics not available yet

Bug 1674341 is opened; it is a duplicate of Bug 1674372, and Bug 1674372 is not fixed as of:

# oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.nightly-2019-02-20-194410   True        False         11m     Cluster version is 4.0.0-0.nightly-2019-02-20-194410

Moving this back to MODIFIED.
The prometheus-adapter is running in my 4.0 test cluster and logging:

```
E0226 16:22:08.024875 1 reststorage.go:129] unable to fetch node metrics for node "test1-kdmnj-master-0": no metrics known for node
E0226 16:22:08.024896 1 reststorage.go:129] unable to fetch node metrics for node "test1-kdmnj-worker-0-c5j8v": no metrics known for node
```

It appears this ticket should be redirected to the monitoring team, because the request from `oc adm top node` appears to be going all the way through.
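A sketch of how to gather that adapter-side evidence yourself; assuming the deployment is named the same as the service in the APIService spec earlier in this bug:

```
# Adapter-side errors ("no metrics known for node"):
oc -n openshift-monitoring logs deploy/prometheus-adapter | grep "no metrics known"
# What the aggregated API returns for node metrics right now:
oc get --raw /apis/metrics.k8s.io/v1beta1/nodes
```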
Talked to Seth; QA needs to create a new bug with the new observations, and verification of this bug is gated on that new bug. Junqi: your report looks like a new regression.
(In reply to Ryan Phillips from comment #15)
> Talked to Seth; QA needs to create a new bug with the new observations, and
> verification of this bug is gated on that new bug. Junqi: your report looks
> like a new regression.

"oc adm top node" doesn't hit the error "runtime error: invalid memory address or nil pointer dereference" any more:

$ oc adm top node
error: metrics not available yet

Closing this one and re-opening bug 1674341.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758