Created attachment 1528881 [details]
the Number of Pods is null for worker node

Description of problem:
Cloned from https://jira.coreos.com/browse/MON-551

Version-Release number of selected component (if applicable):
cluster-monitoring-operator sha256:dab9fb50d49b7f86f365f190051b62e00fa4f8fd95dd14e9e581b8f2a7c40bc3
configmap-reloader sha256:34d864ec23d52c2a7079c27b1f13042aea4c28f87040e16660c6110332b66793
grafana sha256:3c0ddf2f88e070acdd5276d31ef39f7e4dffdb005330cdcb4cdd6992acd27dbe
k8s-prometheus-adapter sha256:227479bffec9dca3e3406a3ffef5a01292ab27e4517a7c49569f1c32c9600d42
kube-rbac-proxy sha256:fd602ef255d3bf8a4cdc5ae801fe165e173a6bb0a338310424b80b972bde9f20
kube-state-metrics sha256:e244502d4b00e95f5e68bcfa08b926ced8e874b5afc6a002372f9bd53862a96f
prom-label-proxy sha256:8e188e8623daa9bcdadd0b2b815bd7a88c8087891101a62ffbad18618a097404
prometheus sha256:ecfdeea05d7d005e53cbd3ff1bc9c1b543ef14becf88bbba67affef045705037
prometheus-alertmanager sha256:c8a562dc7304a89128d47a852c96406d27c98b9eb7818b89992c022b14b08d6c
prometheus-config-reloader sha256:0454a7e3d5bdcdaf77483e20c4776decff3dfa19a41e6b628511635c8c3c2458
prometheus-node-exporter sha256:1e179d8f99f88247bcca8e3c0628d3e5c18878b24c7f0803a72498236694bed1
prometheus-operator sha256:fc5aa7d371096afc4580fc5c5081868c2fcad0ec129229bc23feb54145a23ef8
telemeter sha256:23848400031e83a6f0e6688ac8a5548578a0eada0ef02c0f3aec8ef43260797d

How reproducible:
Always

Steps to Reproduce:
1. Log in to the cluster console as an admin user, click "Administration -> Nodes", and select one node to check its node metrics.
2.
3.

Actual results:
kubelet_running_pod_count only counts pods on master nodes.

Expected results:
kubelet_running_pod_count should count pods on both master and worker nodes.

Additional info:
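For reference, one way to check the raw metric outside the console is to query the in-cluster Prometheus HTTP API directly. The Python sketch below is not part of the original report; the Prometheus URL and bearer token are placeholders and would need to be replaced with the values for the cluster under test.

# Hypothetical check: list kubelet_running_pod_count per node via the
# Prometheus HTTP API. PROM_URL and TOKEN are placeholders.
import requests

PROM_URL = "https://prometheus-k8s-openshift-monitoring.example.com"
TOKEN = "replace-with-a-real-bearer-token"

resp = requests.get(
    f"{PROM_URL}/api/v1/query",
    params={"query": "kubelet_running_pod_count"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    verify=False,  # many test clusters use a self-signed router cert
)
resp.raise_for_status()

# Print the pod count reported for each node; a worker node missing from
# this list means no kubelet_running_pod_count series exists for it.
for sample in resp.json()["data"]["result"]:
    print(sample["metric"].get("node", "<no node label>"), sample["value"][1])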
Given that we are currently not able to scrape kubelets on worker nodes, this metric is not being collected for them; as a result, the graph shows zero pods running on those nodes. I think tracking this in https://bugzilla.redhat.com/show_bug.cgi?id=1674368 is enough. Thanks for the catch @Junqi.

*** This bug has been marked as a duplicate of bug 1674368 ***
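To confirm that the missing series is a scrape problem rather than a metric problem, the kubelet scrape targets themselves can be inspected through the standard Prometheus /api/v1/targets endpoint. This is a sketch only, reusing the placeholder PROM_URL and TOKEN from the snippet above.

# Hypothetical check: list kubelet scrape targets and their health.
# A worker node whose kubelet target is missing or reported as "down"
# explains why kubelet_running_pod_count has no series for that node.
import requests

PROM_URL = "https://prometheus-k8s-openshift-monitoring.example.com"
TOKEN = "replace-with-a-real-bearer-token"

resp = requests.get(
    f"{PROM_URL}/api/v1/targets",
    headers={"Authorization": f"Bearer {TOKEN}"},
    verify=False,
)
resp.raise_for_status()

for target in resp.json()["data"]["activeTargets"]:
    if target["labels"].get("job") == "kubelet":
        print(target["labels"].get("instance"), target["health"])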