Bug 1668315 - zero for container_network_tcp_usage_total and container_network_udp_usage_total
Summary: zero for container_network_tcp_usage_total and container_network_udp_usage_total
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Monitoring
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.1.0
Assignee: Frederic Branczyk
QA Contact: Junqi Zhao
URL:
Whiteboard:
Duplicates: 1668313 (view as bug list)
Depends On:
Blocks:
 
Reported: 2019-01-22 12:37 UTC by Junqi Zhao
Modified: 2019-06-04 10:42 UTC (History)
1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:42:06 UTC
Target Upstream Version:


Attachments (Terms of Use)
zero for container_network_tcp_usage_total (396.50 KB, image/png)
2019-01-22 12:37 UTC, Junqi Zhao


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0758 None None None 2019-06-04 10:42:13 UTC

Description Junqi Zhao 2019-01-22 12:37:03 UTC
Created attachment 1522390 [details]
zero for container_network_tcp_usage_total

Description of problem:
Cloned from https://jira.coreos.com/browse/MON-522


See the attached picture: the container_network_tcp_usage_total values are all zero.
The same is true for container_network_udp_usage_total.


Checked the openshift-monitoring/kubelet/1 targets: container_network_tcp_usage_total and container_network_udp_usage_total are all zero on the :10250/metrics/cadvisor endpoints.
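For reference, the check above can be reproduced against a saved scrape of the cadvisor endpoint. This is a minimal sketch: the sample lines below are fabricated to mirror the reported state (every sample 0); on a real node the payload would come from an authenticated request to https://<node>:10250/metrics/cadvisor.

```shell
# Fabricated sample of the cAdvisor exposition output described in this bug:
# the tcp/udp usage series are present, but every value is 0.
cat > /tmp/cadvisor-sample.txt <<'EOF'
container_network_tcp_usage_total{interface="eth0",tcp_state="established"} 0
container_network_tcp_usage_total{interface="eth0",tcp_state="listen"} 0
container_network_udp_usage_total{interface="eth0",udp_state="listen"} 0
EOF

# List any non-zero samples; empty output matches the reported behavior.
awk '/^container_network_(tcp|udp)_usage_total/ && $NF+0 != 0' /tmp/cadvisor-sample.txt
```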
 

Checked ifconfig on one node; given the traffic shown below, container_network_tcp_usage_total and container_network_udp_usage_total should not be zero.

$ ifconfig
enp0s31f6: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 54:e1:ad:42:76:28 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 16 memory 0xec200000-ec220000

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 20 bytes 1736 (1.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 20 bytes 1736 (1.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

tun0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1360
inet 10.72.12.75 netmask 255.255.252.0 destination 10.72.12.75
inet6 fe80::1a99:ca46:c252:e11c prefixlen 64 scopeid 0x20<link>
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 100 (UNSPEC)
RX packets 34014 bytes 23390808 (22.3 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 33895 bytes 10759728 (10.2 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.124.1 netmask 255.255.255.0 broadcast 192.168.124.255
ether 52:54:00:4f:23:11 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

wlp58s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.103 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::5065:3d:fad9:6b31 prefixlen 64 scopeid 0x20<link>
ether f8:59:71:49:16:c8 txqueuelen 1000 (Ethernet)
RX packets 66255 bytes 62641848 (59.7 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 44760 bytes 15410190 (14.6 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


Version-Release number of selected component (if applicable):
quay.io/openshift/origin-configmap-reloader:latest
quay.io/openshift/origin-grafana:latest
quay.io/openshift/origin-k8s-prometheus-adapter:latest
quay.io/openshift/origin-kube-rbac-proxy:latest
quay.io/openshift/origin-kube-state-metrics:latest
quay.io/openshift/origin-oauth-proxy:latest
quay.io/openshift/origin-prom-label-proxy:latest
quay.io/openshift/origin-prometheus-alertmanager:latest
quay.io/openshift/origin-prometheus-config-reloader:latest
quay.io/openshift/origin-prometheus-node-exporter:latest
quay.io/openshift/origin-prometheus-operator:latest
quay.io/openshift/origin-prometheus:latest
quay.io/openshift/origin-telemeter:latest
registry.svc.ci.openshift.org/ocp/4.0-art-latest-2019-01-18-115403@sha256:e947243a47f297cad62cfb3c9981a3e9a73aec50152af688b2472983175938aa


How reproducible:
Always

Steps to Reproduce:
1. Check container_network_tcp_usage_total and container_network_udp_usage_total in prometheus UI

Actual results:
container_network_tcp_usage_total and container_network_udp_usage_total are zero.

Expected results:
container_network_tcp_usage_total and container_network_udp_usage_total should not be zero.

Additional info:

Comment 1 Frederic Branczyk 2019-02-04 16:05:05 UTC
> Checked from openshift-monitoring/kubelet/1 targets, zero for container_network_tcp_usage_total and container_network_udp_usage_total from 10250/metrics/cadvisor endpoints

If I understand you correctly, you're saying cAdvisor doesn't even expose these correctly? Have you tried this recently? The 1.12 rebase may fix it.

Comment 2 Frederic Branczyk 2019-02-04 16:06:23 UTC
*** Bug 1668313 has been marked as a duplicate of this bug. ***

Comment 3 Frederic Branczyk 2019-02-06 13:37:54 UTC
It turns out these are intended to be 0: they are "disabled" metrics in cAdvisor, so cAdvisor still exposes them, but with every value set to 0. We'll make sure to drop these metrics at ingestion to avoid confusion.

The respective upstream issue: https://github.com/google/cadvisor/issues/1925

Comment 4 Frederic Branczyk 2019-02-07 11:17:19 UTC
As the metric from cAdvisor is disabled, we are dropping it as of this pull request: https://github.com/openshift/cluster-monitoring-operator/pull/235
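For illustration, a rule of the following shape in a Prometheus scrape configuration drops such series at ingestion time, before they are stored. This is a sketch only, not the actual change from the linked pull request, and the job name is hypothetical:

```yaml
scrape_configs:
- job_name: kubelet-cadvisor        # hypothetical job name
  # ... scrape settings elided ...
  metric_relabel_configs:
  # Drop the disabled cAdvisor series before they are ingested.
  - source_labels: [__name__]
    regex: container_network_(tcp|udp)_usage_total
    action: drop
```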

From now on, we shouldn't see the metrics at all anymore.

Comment 6 Junqi Zhao 2019-02-19 03:44:33 UTC
container_network_tcp_usage_total and container_network_udp_usage_total metrics are removed.
# oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.nightly-2019-02-18-224151   True        False         57m     Cluster version is 4.0.0-0.nightly-2019-02-18-224151
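One way to verify the metrics were dropped is to query the Prometheus HTTP API for each series and check that the result set is empty. A minimal sketch follows; the canned JSON below only mimics the shape of an empty /api/v1/query response, and the commented curl invocation (route name, token) is an assumption:

```shell
# On a live cluster you might fetch the real response with something like:
#   curl -k -H "Authorization: Bearer $TOKEN" \
#     "https://<prometheus-route>/api/v1/query?query=container_network_tcp_usage_total"
# Here a canned response of the same shape demonstrates the check itself.
resp='{"status":"success","data":{"resultType":"vector","result":[]}}'

# jq -e: assert the query succeeded and returned no series (metric is gone).
echo "$resp" | jq -e '.status == "success" and (.data.result | length == 0)'
```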


configmap-reloader: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24eb3125b5fec17e2db68b7fcd406d5aecba67ebe6da18fbd9c2c7e884ce00f8
cluster-monitoring-operator: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d0d8d43b79fb970a7a090a759da06aebb1dec7e31fffd2d3ed455f92a998522
prometheus-config-reloader: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:31905d24b331859b99852c6f4ef916539508bfb61f443c94e0f46a83093f7dc0
kube-state-metrics: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f0b3aa9c8923c95233f2872a6d4842796ab202a91faa8595518ad6a154f1d87
kube-rbac-proxy: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:451274b24916b97e5ba2116dd0775cdb7e1de98d034ac8874b81c1a3b22cf6b1
k8s-prometheus-adapter: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:580e5a5cd057e2c09ea132fed5c75b59423228587631dcd47f9471b0d1f9a872
prometheus-operator: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b4ba55ab5ec5bb1b4c024a7b99bc67fe108a28e564288734f9884bc1055d4ed
prometheus-node-exporter: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de207bf1cdbdcbe54fe97684d6b3aaf9d362a46f7d0a7af1e989cdf57b59599
prometheus-alertmanager: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9d8b88bd937ccf01b9cb2584ceb45b829406ebc3b35201f73eead00605b4fdfc
prometheus: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b50f38e8f288fdba31527bfcb631d0a15bb2c9409631ef30275f5483946aba6f
telemeter: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6cbfe8c7034edf8d0df1df4208543fe5f37a8ad306eaf736bcd7c1cbb999ffc
prom-label-proxy: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efe301356a6f40679e27e6e8287ed6d8316e54410415f4f3744f3182c1d4e07e
grafana: quay.io/openshift/origin-grafana:latest
oauth-proxy: quay.io/openshift/origin-oauth-proxy:latest


RHCOS build: 47.318

Comment 9 errata-xmlrpc 2019-06-04 10:42:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

