Bug 1659362 - kubelet targets are down
Summary: kubelet targets are down
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Monitoring
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.1.z
Assignee: Frederic Branczyk
QA Contact: Junqi Zhao
URL:
Whiteboard:
Duplicates: 1659361
Depends On:
Blocks:
 
Reported: 2018-12-14 08:09 UTC by Junqi Zhao
Modified: 2019-09-25 07:28 UTC
CC List: 2 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-09-25 07:27:53 UTC
Target Upstream Version:
Embargoed:


Attachments
kubelet targets are down (379.78 KB, image/png)
2018-12-14 08:09 UTC, Junqi Zhao
kubelet targets are up (161.87 KB, image/png)
2018-12-20 07:44 UTC, Junqi Zhao


Links
Red Hat Product Errata RHBA-2019:2820 (last updated 2019-09-25 07:28:02 UTC)

Description Junqi Zhao 2018-12-14 08:09:10 UTC
Created attachment 1514307 [details]
kubelet targets are down

Description of problem:
This bug is cloned from https://jira.coreos.com/browse/MON-498.
Filed again so the QE team can track the monitoring issue in Bugzilla.

Deploy cluster monitoring on AWS with the new installer.

kubelet targets are down in the /targets UI. This causes some metrics to be missing from the Grafana UI; see the attached grafana_pod_resouce.png.

Error shown in the /targets UI:

x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "root-ca")
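
This error means Prometheus rejects the kubelet serving certificate because it chains to the "root-ca" issuer, which is not in the CA bundle configured for the kubelet scrape job. A minimal diagnostic sketch (not part of the original report), assuming the kubelet ServiceMonitor in openshift-monitoring is named "kubelet" and using a placeholder <node-ip>:

# Check which CA signed the kubelet serving certificate (10250 is the kubelet port).
openssl s_client -connect <node-ip>:10250 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject

# Check which CA Prometheus is told to trust for the kubelet scrape.
oc -n openshift-monitoring get servicemonitor kubelet -o yaml

# The target only comes up when the CA referenced in the ServiceMonitor's
# tlsConfig matches the issuer printed above (or insecureSkipVerify is set).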

Version-Release number of selected component (if applicable):
docker.io/grafana/grafana:5.2.4
docker.io/openshift/oauth-proxy:v1.1.0
docker.io/openshift/prometheus-alertmanager:v0.15.2
docker.io/openshift/prometheus-node-exporter:v0.16.0
docker.io/openshift/prometheus:v2.5.0
quay.io/coreos/configmap-reload:v0.0.1
quay.io/coreos/k8s-prometheus-adapter-amd64:v0.4.0
quay.io/coreos/kube-rbac-proxy:v0.4.0
quay.io/coreos/kube-state-metrics:v1.4.0
quay.io/coreos/prom-label-proxy:v0.1.0
quay.io/coreos/prometheus-config-reloader:v0.26.0
quay.io/coreos/prometheus-operator:v0.26.0
quay.io/openshift/origin-configmap-reload:v3.11
quay.io/openshift/origin-telemeter:v4.0
registry.svc.ci.openshift.org/openshift/origin-v4.0-2018-12-14-003614@sha256:4b96754e4e429971ff85304a54a4f354fa644cef0e14d3aca0f18bdaad1e45d2

How reproducible:
Always

Steps to Reproduce:
1. Deploy cluster monitoring on AWS with the new installer and check the targets on the /targets page (a sketch for checking the targets without the web console follows these steps)
2.
3.
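
The same check can be done without the web console. A hedged sketch, assuming the pod prometheus-k8s-0 exists in openshift-monitoring, that Prometheus serves its API on port 9090 inside that pod, and that the kubelet scrape job carries the label job="kubelet":

# Terminal 1: forward the Prometheus API to the local machine.
oc -n openshift-monitoring port-forward prometheus-k8s-0 9090

# Terminal 2: every kubelet target should report up == 1; a value of 0
# reproduces the "target down" state described in this bug.
curl -s -G http://localhost:9090/api/v1/query --data-urlencode 'query=up{job="kubelet"}'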

Actual results:
kubelet targets are down

Expected results:
kubelet targets should be up

Additional info:

Comment 1 Junqi Zhao 2018-12-17 06:26:20 UTC
*** Bug 1659361 has been marked as a duplicate of this bug. ***

Comment 2 Junqi Zhao 2018-12-20 07:43:31 UTC
Issue is fixed with the following images:

docker.io/grafana/grafana:5.2.4
docker.io/openshift/oauth-proxy:v1.1.0
docker.io/openshift/prometheus-alertmanager:v0.15.2
docker.io/openshift/prometheus-node-exporter:v0.16.0
docker.io/openshift/prometheus:v2.5.0
quay.io/coreos/configmap-reload:v0.0.1
quay.io/coreos/k8s-prometheus-adapter-amd64:v0.4.1
quay.io/coreos/kube-rbac-proxy:v0.4.0
quay.io/coreos/kube-state-metrics:v1.4.0
quay.io/coreos/prometheus-config-reloader:v0.26.0
quay.io/coreos/prometheus-operator:v0.26.0
quay.io/openshift/origin-configmap-reload:v3.11
quay.io/openshift/origin-telemeter:v4.0
registry.svc.ci.openshift.org/openshift/origin-v4.0-2018-12-19-004812@sha256:9ea6b852e7cdf29529d1a8d56cde5eacd8f40e9d2b806d0984e24a4c27df5567

Comment 3 Junqi Zhao 2018-12-20 07:44:00 UTC
Created attachment 1515810 [details]
kubelet targets are up

Comment 5 errata-xmlrpc 2019-09-25 07:27:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2820

