Description of problem:
When an environment is set up with Prometheus enabled, the networking-related metrics cannot be found in the Prometheus console.

Version-Release number of selected component (if applicable):
v3.10.0-0.63.0

How reproducible:
Always

Steps to Reproduce:
1. Set up an OCP cluster with Prometheus enabled
2. Log in to the Prometheus console and check the SDN-related metrics

Actual results:
Cannot find the OpenShift SDN related items.

Expected results:
The networking-related metrics should be captured by Prometheus.

Additional info:
The following metrics should be shown in the Prometheus console:

openshift_sdn_ovs_flows
openshift_sdn_arp_cache_entries
openshift_sdn_pod_ips
openshift_sdn_pod_setup_errors
openshift_sdn_pod_setup_latency
openshift_sdn_pod_teardown_errors
openshift_sdn_pod_teardown_latency
This is presumably not a regression, so moving to 3.11. Reassigning to the metrics team, since I assume they are in charge of hooking up metrics.
Correct, this is not a regression; it's simply not configured. Assigning this to Casey for the SDN team to configure metrics collection for 3.11.
Where did the list of metrics in comment 1 come from? Were these metrics previously exposed via hawkular?
(In reply to Casey Callendrello from comment #4)
> Where did the list of metrics in comment 1 come from? Were these metrics
> previously exposed via hawkular?

That is the list of networking-related metrics defined here, and I designed the test case based on it:
https://github.com/openshift/origin/blob/master/pkg/network/node/metrics.go#L19

I am not sure whether they work with Hawkular; we do not have a test case for networking metrics via Hawkular.
Great, thanks for the info. Assigning to dcbw, who wrote that code.
Since this is specifically monitoring of the networking stack I'm moving this to their BZ component for tracking.
Any progress on this? I can still see this problem on v3.11.0
Casey, I thought I'd done everything needed to expose those, but now that the SDN is a daemonset, perhaps more is required? Any idea what's needed to actually push the metrics out now?
The question is: where's the problem? Is the daemonset correctly answering on the metrics endpoint? Is Prometheus configured to scrape that endpoint? The first question should be easy enough to answer; see the sketch below. I'll also ask the monitoring people if they can answer the second.
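For example, something along these lines could answer the first question (purely a sketch; the app=sdn pod label, the 9101 port, and curl being present in the image are all assumptions on my part, not values confirmed in this bug):

# pod label, port, and curl availability are assumptions, not verified here
POD=$(oc -n openshift-sdn get pods -l app=sdn -o name | head -n1)
oc -n openshift-sdn exec "$POD" -- curl -s http://localhost:9101/metrics | grep openshift_sdn

If that prints the openshift_sdn_* series, the endpoint is answering and the problem is on the scrape-configuration side.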
The Prometheus-related services run in the openshift-metrics project, and the SDN-related services run in the openshift-sdn project. Can Prometheus in openshift-metrics read the info from openshift-sdn?
Figured this one out. We need to (a sketch of steps 3 and 4 follows):
1) Decide on a port for the SDN to use
2) Add that to "metrics-bind-address" in ProxyArguments
3) Configure a headless service for the metrics port with appropriate labels (see etcd for an example)
4) Create a ServiceMonitor object
5) Profit!
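A minimal sketch of what steps 3 and 4 might look like (the port 9101, the app=sdn labels, and the object names are illustrative assumptions; step 1 is where the real port gets decided):

oc apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: sdn                  # name is an assumption, for illustration only
  namespace: openshift-sdn
  labels:
    app: sdn
spec:
  clusterIP: None            # headless, following the etcd example
  selector:
    app: sdn                 # assumed label on the SDN daemonset pods
  ports:
  - name: metrics
    port: 9101               # assumed; must match metrics-bind-address
    targetPort: 9101
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: monitor-sdn
  namespace: openshift-sdn
spec:
  selector:
    matchLabels:
      app: sdn               # selects the headless service above
  namespaceSelector:
    matchNames:
    - openshift-sdn
  endpoints:
  - port: metrics            # references the named service port
    interval: 30s
EOF

The namespaceSelector on the ServiceMonitor is also what lets the Prometheus running in openshift-metrics discover endpoints in the openshift-sdn project, which answers the cross-project question above.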
Clayton says he'll "take care of this..."
Hi, this issue is still not fixed in 4.0 (payload 4.0.0-0.nightly-2019-01-24-184525), and the target release is 4.0.0. Any progress on this?
Yes, Jacob has been making progress and this should be in soon.
Posted and merged: https://github.com/openshift/cluster-network-operator/pull/89
Verified on 4.0.0-0.nightly-2019-03-28-210640. The following SDN-related metrics are now captured in the Prometheus console:

openshift_sdn_ovs_flows
openshift_sdn_arp_cache_entries
openshift_sdn_pod_ips
openshift_sdn_pod_setup_errors
openshift_sdn_pod_setup_latency
openshift_sdn_pod_teardown_errors
openshift_sdn_pod_teardown_latency

Thanks!
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758