Bug 1655841 - telemeter-client pod is not created
Summary: telemeter-client pod is not created
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Monitoring
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.1.0
Assignee: lserven
QA Contact: Junqi Zhao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-12-04 04:19 UTC by Junqi Zhao
Modified: 2019-06-04 10:41 UTC (History)
2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:41:04 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0758 0 None None None 2019-06-04 10:41:10 UTC

Description Junqi Zhao 2018-12-04 04:19:52 UTC
Description of problem:
This bug is cloned from https://jira.coreos.com/browse/MON-485.
Filing it again so the QE team can track the monitoring issue in Bugzilla.


Tested on libvirt with the new installer; the telemeter-client pod is not created.
$ oc -n openshift-monitoring get pod
NAME                                          READY     STATUS    RESTARTS   AGE
alertmanager-main-0                           3/3       Running   0          2h
alertmanager-main-1                           3/3       Running   0          2h
alertmanager-main-2                           3/3       Running   0          2h
cluster-monitoring-operator-789bdb899-w59t6   1/1       Running   1          2h
grafana-58456d859d-2j5w9                      2/2       Running   5          2h
kube-state-metrics-dcf7dc56d-mdj9r            3/3       Running   0          2h
node-exporter-btx9d                           2/2       Running   0          2h
node-exporter-kqdkh                           2/2       Running   0          2h
node-exporter-ttj2s                           2/2       Running   0          2h
prometheus-k8s-0                              6/6       Running   5          2h
prometheus-k8s-1                              6/6       Running   1          2h
prometheus-operator-77fbf5d5d6-wgn66          1/1       Running   0          2h
$ oc -n openshift-monitoring get cm
NAME                                        DATA      AGE
cluster-monitoring-config                   1         3h
grafana-dashboard-k8s-cluster-rsrc-use      1         2h
grafana-dashboard-k8s-node-rsrc-use         1         2h
grafana-dashboard-k8s-resources-cluster     1         2h
grafana-dashboard-k8s-resources-namespace   1         2h
grafana-dashboard-k8s-resources-pod         1         2h
grafana-dashboards                          1         2h
prometheus-k8s-rulefiles-0                  1         2h
serving-certs-ca-bundle                     1         2h
sharing-config                              3         2h
$ oc -n openshift-monitoring get deploy
NAME                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
cluster-monitoring-operator   1         1         1            1           3h
grafana                       1         1         1            1           2h
kube-state-metrics            1         1         1            1           2h
prometheus-operator           1         1         1            1           2h


Version-Release number of selected component (if applicable):
docker.io/grafana/grafana:5.2.4
docker.io/openshift/oauth-proxy:v1.1.0
docker.io/openshift/prometheus-alertmanager:v0.15.2
docker.io/openshift/prometheus-node-exporter:v0.16.0
docker.io/openshift/prometheus:v2.5.0
quay.io/coreos/configmap-reload:v0.0.1
quay.io/coreos/kube-rbac-proxy:v0.4.0
quay.io/coreos/kube-state-metrics:v1.4.0
quay.io/coreos/prom-label-proxy:v0.1.0
quay.io/coreos/prometheus-config-reloader:v0.26.0
quay.io/coreos/prometheus-operator:v0.26.0
registry.svc.ci.openshift.org/openshift/origin-v4.0-2018-12-03-233731@sha256:20393c3ce270834bfe261c1eaabea8947732240f6f1235c39eecb5ffa05d1835


How reproducible:
Always

Steps to Reproduce:
1. Deploy cluster monitoring on libvirt with the new installer

Actual results:
telemeter-client pod is not created

Expected results:
telemeter-client pod should be created

Additional info:

Comment 1 Junqi Zhao 2018-12-04 04:20:58 UTC
This blocks telemeter testing.

Comment 2 Junqi Zhao 2018-12-13 07:50:10 UTC
Issue is fixed; the telemeter-client pod is now present.

$ oc get pod -n openshift-monitoring | grep telemeter-client
telemeter-client-747b776f55-pffrs              3/3       Running   0          2h

Images:
quay.io/openshift/origin-telemeter:v4.0
quay.io/openshift/origin-configmap-reload:v3.11
quay.io/coreos/kube-rbac-proxy:v0.4.0

Comment 3 Junqi Zhao 2019-01-07 11:04:14 UTC
Issue is reproduced again; re-opening it.

# oc -n openshift-monitoring get deployment | grep telemeter-client

Nothing is returned, and the following resources are not created either:

# oc -n openshift-monitoring get pod | grep telemeter-client
# oc -n openshift-monitoring get configmap | grep telemeter-client-serving-certs-ca-bundle
# oc -n openshift-monitoring get secret | grep telemeter-client
# oc -n openshift-monitoring get ServiceMonitor | grep telemeter-client
# oc -n openshift-monitoring get service | grep telemeter-client
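The per-resource checks above can be collapsed into one pass. A minimal sketch, where the absence check is factored into a small POSIX-shell function so it can be exercised against a captured listing (the `missing_from` helper name and the sample listing are illustrative, not from this bug):

```shell
#!/bin/sh
# missing_from NAME: read a resource listing on stdin (first column =
# resource name, as printed by `oc get <kind>`) and report whether NAME
# is present. Hypothetical helper for illustration only.
missing_from() {
  if awk '{print $1}' | grep -q "^$1"; then
    echo "found: $1"
  else
    echo "MISSING: $1"
  fi
}

# Exercise it against a captured pod listing like the one above,
# where telemeter-client is absent:
result=$(printf 'grafana-58456d859d-2j5w9\nprometheus-k8s-0\n' | missing_from telemeter-client)
echo "$result"
```

On a live cluster, the same function could be fed from `oc -n openshift-monitoring get <kind>` for each of the kinds checked above (pod, configmap, secret, ServiceMonitor, service).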

Images used:

docker.io/grafana/grafana:5.2.4
docker.io/openshift/oauth-proxy:v1.1.0
docker.io/openshift/prometheus-alertmanager:v0.15.2
docker.io/openshift/prometheus-node-exporter:v0.16.0
docker.io/openshift/prometheus:v2.5.0
quay.io/coreos/configmap-reload:v0.0.1
quay.io/coreos/kube-rbac-proxy:v0.4.0
quay.io/coreos/kube-state-metrics:v1.4.0
quay.io/coreos/prom-label-proxy:v0.1.0
quay.io/coreos/prometheus-config-reloader:v0.25.0
quay.io/coreos/prometheus-operator:v0.25.0
quay.io/openshift/origin-cluster-monitoring-operator:v4.0
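Comparing this image list with the one in the original description shows version drift: prometheus-operator and prometheus-config-reloader moved from v0.26.0 to v0.25.0 between the two failing runs. A minimal sketch of that comparison using `comm` on abbreviated lists copied from this report:

```shell
#!/bin/sh
# Diff the (abbreviated) image lists from the original description and
# from comment 3; lists copied from this bug report.
printf '%s\n' \
  'quay.io/coreos/kube-rbac-proxy:v0.4.0' \
  'quay.io/coreos/prometheus-config-reloader:v0.26.0' \
  'quay.io/coreos/prometheus-operator:v0.26.0' | sort > /tmp/run1.txt
printf '%s\n' \
  'quay.io/coreos/kube-rbac-proxy:v0.4.0' \
  'quay.io/coreos/prometheus-config-reloader:v0.25.0' \
  'quay.io/coreos/prometheus-operator:v0.25.0' | sort > /tmp/run2.txt
# comm -3 suppresses lines common to both files, leaving only the drift:
drift=$(comm -3 /tmp/run1.txt /tmp/run2.txt)
echo "$drift"
```

Only the two images whose tags changed between the runs survive the `comm -3` filter; the common kube-rbac-proxy line is suppressed.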

Comment 4 Frederic Branczyk 2019-01-15 18:06:23 UTC
@Junqi I believe you closed this on Jira today. Can we close this here?

Comment 5 Junqi Zhao 2019-01-16 01:52:05 UTC
Issue is fixed with:
# oc get clusterversion version -oyaml | grep payload
payload: quay.io/openshift-release-dev/ocp-release@sha256:66cee7428ba0d3cb983bd2a437e576b2289e7fd5abafa70256200a5408b26644

cluster-monitoring-operator pod image:
image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b88121a3c893297c1be75261bae142b6312227c72bc72a0a64c68363a96601f
$ oc -n openshift-monitoring get pod | grep telemeter-client
telemeter-client-69478fc49f-z88h2              3/3       Running   0          2h
$ oc -n openshift-monitoring get configmap | grep telemeter-client-serving-certs-ca-bundle
telemeter-client-serving-certs-ca-bundle    1         2h
$ oc -n openshift-monitoring get secret | grep telemeter-client
telemeter-client                              Opaque                                6         2h
telemeter-client-dockercfg-h65st              kubernetes.io/dockercfg               1         2h
telemeter-client-tls                          kubernetes.io/tls                     2         2h
telemeter-client-token-jj7jv                  kubernetes.io/service-account-token   3         2h
telemeter-client-token-n7vtw                  kubernetes.io/service-account-token   3         2h
$ oc -n openshift-monitoring get ServiceMonitor | grep telemeter-client
telemeter-client              2h
$ oc -n openshift-monitoring get service | grep telemeter-client
telemeter-client              ClusterIP   None             <none>        8443/TCP            2h

Comment 8 errata-xmlrpc 2019-06-04 10:41:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

