Description of problem:
The OpenShift Monitoring stack shipped with 3.11 is missing node-specific metrics:
these metrics should be available in Prometheus and subsequently be displayed in the Grafana dashboards.
Version-Release number of selected component (if applicable):
OpenShift Container Platform 3.11
Steps to Reproduce:
Will attach the screenshots from Grafana.
Are the node-exporter targets healthy? (You can see this on the /targets page of the Prometheus UI)
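For reference, target health can also be checked from the CLI instead of the UI; a minimal sketch, assuming the default prometheus-k8s-0 pod name and a local port-forward:

$ oc -n openshift-monitoring port-forward prometheus-k8s-0 9090:9090 &
$ curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[a-z]*"' | sort | uniq -c

Any "health":"down" entries point to unhealthy targets.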
Having the same issue on OCP v3.11.59.
> Are the node-exporter targets healthy? (You can see this on the /targets
> page of the Prometheus UI)
Yes, all are healthy. There are more than 1,000 metrics available in Prometheus, but only a few node_* metrics:
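The number of available node_* metrics can be quantified against the Prometheus API; a sketch, assuming the same port-forward as above:

$ curl -s http://localhost:9090/api/v1/label/__name__/values | tr ',' '\n' | grep -c '"node_'

A node-exporter with all collectors enabled should expose dozens of node_* metric names, not just a handful.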
Could you share the Pod definition of one of those node-exporter Pods as well as sample logs? Are you sure these are Pods from the `openshift-monitoring` namespace? The tech preview had a number of node-exporter collectors turned off, but the new stack should have all of these metrics. What you're seeing might be the node-exporter of the tech-preview stack.
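A sketch of commands to gather that information (assuming the app=node-exporter label used by the default openshift-monitoring DaemonSet; adjust if yours differs):

$ oc -n openshift-monitoring get pods -l app=node-exporter -o wide
$ oc -n openshift-monitoring get pod <node-exporter-pod> -o yaml
$ oc -n openshift-monitoring logs <node-exporter-pod> -c node-exporter

The -o yaml output shows the image and the collector arguments, which should distinguish the new stack from the tech-preview one.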
Looks like https://bugzilla.redhat.com/show_bug.cgi?id=1608288 is the reason. Despite the release of https://access.redhat.com/errata/RHBA-2018:2652, our legacy openshift-metrics project was still using port 9100 on all nodes (we upgraded from OCP 3.10.59 to 3.11.59 and performed https://docs.openshift.com/container-platform/3.11/upgrading/automated_upgrades.html#upgrading-cluster-metrics afterwards).
Even after applying https://github.com/openshift/openshift-ansible/pull/9749/commits/d328bebd71c57692024cb693a72e15d0cb8f6676 manually and removing the old DaemonSet from the openshift-metrics project, we still have some issues within the openshift-monitoring project:
- the targets for kubernetes-nodes-exporter are down
- the node-exporter pods are running, except on the infra nodes, where ports 9101/1936 are already in use by HAProxy (see the sketch below)
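A quick check for the port conflict on an infra node; a sketch, assuming shell access to the node:

$ ss -tlnp | grep -E ':(9101|1936) '

If HAProxy or the haproxy-exporter shows up as the owner of those ports, the node-exporter pod cannot bind them because it runs on the host network.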
I will create a support case for this.
The issues have been fixed for us:
- the node-exporter pods on the infra nodes couldn't start properly due to the prom/haproxy-exporter pods that are part of the HAProxy routers (https://docs.openshift.com/container-platform/3.5/install_config/router/default_haproxy_router.html#exposing-the-router-metrics). As we did not use these metrics, we deleted those pods (edited the deployment configs).
- the targets for kubernetes-nodes-exporter were down (except for the node where Prometheus was running) due to a missing iptables rule, despite https://bugzilla.redhat.com/show_bug.cgi?id=1563888. Fixed by adding "iptables -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 9000:10000 -j ACCEPT" on all nodes (see the verification sketch below).
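A minimal verification sketch for the firewall fix (the port is an assumption based on the default stack, where kube-rbac-proxy serves the node-exporter metrics on host port 9100):

$ iptables -L OS_FIREWALL_ALLOW -n --line-numbers | grep '9000:10000'
$ curl -sk https://<node-ip>:9100/metrics

Even an Unauthorized response from the second command confirms the port is reachable through the firewall; a connection timeout means the rule is still missing.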
MH, is there still something needed?
Created attachment 1539723 [details]
Kubernetes / USE Method / Cluster Grafana UI
Created attachment 1539724 [details]
Kubernetes / Compute Resources / Cluster Grafana UI
Issue is fixed, see the pictures in Comment 29 and Comment 30.
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.nightly-2019-02-27-213933   True        False         80m     Cluster version is 4.0.0-0.nightly-2019-02-27-213933
Did not find the 3.11 issue on AWS/GCE.
Only one issue on 3.11 OpenStack: searching for "node:node_disk_utilisation:avg_irate" and "node:node_disk_saturation:avg_irate" in Prometheus returns the error "No datapoints found.". This breaks the "Disk IO Utilisation" and "Disk IO Saturation" panels on the Grafana "K8s / USE Method / Cluster" page; it is tracked in bug 1680517. AWS/GCE do not have this issue.
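The missing datapoints can be confirmed directly against the Prometheus API; a sketch, assuming a local port-forward to the prometheus-k8s-0 pod as above:

$ oc -n openshift-monitoring port-forward prometheus-k8s-0 9090:9090 &
$ curl -s 'http://localhost:9090/api/v1/query?query=node:node_disk_utilisation:avg_irate'

An empty "result":[] array in the response corresponds to the "No datapoints found." message in the UI.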
(In reply to Junqi Zhao from comment #32)
> this issue caused "Disk IO Utilisation" and "Disk IO
> Saturation" in grafana "K8s / USE Method / Cluster" page
this issue causes "No data points" to be shown for "Disk IO Utilisation" and "Disk IO Saturation" on the Grafana "K8s / USE Method / Cluster" page on OpenStack
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.