Description of problem:

Reproduced in 4.4.0-0.okd-2020-03-10-052608.

machine-config-operator is using < 4 MBps according to the Kubernetes / Networking / Cluster dashboard in Grafana, although no host-related tasks are in progress.

PromQL query:

topk(5, sum(irate(container_network_receive_bytes_total{namespace='openshift-machine-config-operator'}[5m])) by (pod, namespace))

Data:

{namespace="openshift-machine-config-operator",pod="etcd-quorum-guard-6c84498b67-b478g"}  1419293.650793651
{namespace="openshift-machine-config-operator",pod="machine-config-daemon-pkwxr"}  1309700.038358266
{namespace="openshift-machine-config-operator",pod="machine-config-server-tm8t8"}  1122166.6219479474
{namespace="openshift-machine-config-operator",pod="etcd-quorum-guard-6c84498b67-tzq89"}  920098.4322019672
{namespace="openshift-machine-config-operator",pod="machine-config-daemon-rq659"}  900368.253047011

topk(5, sum(irate(container_network_transmit_bytes_total{namespace='openshift-machine-config-operator'}[5m])) by (pod, namespace)):

{namespace="openshift-machine-config-operator",pod="etcd-quorum-guard-6c84498b67-8j26k"}  987980.2409884613
{namespace="openshift-machine-config-operator",pod="machine-config-daemon-f2csf"}  831509.3057231784
{namespace="openshift-machine-config-operator",pod="machine-config-server-tm8t8"}  797245.3039967231
{namespace="openshift-machine-config-operator",pod="machine-config-daemon-pkwxr"}  789757.2120102048
{namespace="openshift-machine-config-operator",pod="etcd-quorum-guard-6c84498b67-b478g"}  783948.7519474922
The MCD pods run with host networking (hostNetwork: true), so the network metrics for these pods include the traffic of the entire node, not just the pod's own traffic. The MCD itself does not use much bandwidth.
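One way to sanity-check this, sketched below, is to exclude host-network pods from the dashboard query by joining against kube-state-metrics' kube_pod_info: a pod whose pod_ip equals its host_ip is running on the host network. The label_replace comparison trick is an assumption about the available label set (namespace, pod, host_ip, pod_ip), not something verified against this cluster's kube-state-metrics version:

```
# Receive rate per pod, excluding host-network pods.
# kube_pod_info rows where pod_ip == host_ip identify hostNetwork pods:
# label_replace copies pod_ip into host_ip, so the "and on (..., host_ip)"
# match only succeeds when the two labels are equal.
topk(5,
  sum by (pod, namespace) (
    irate(container_network_receive_bytes_total{namespace='openshift-machine-config-operator'}[5m])
  )
  unless on (pod, namespace)
  (
    kube_pod_info
      and on (namespace, pod, host_ip)
    label_replace(kube_pod_info, "host_ip", "$1", "pod_ip", "(.+)")
  )
)
```

With the hostNetwork pods (machine-config-daemon, machine-config-server, etcd-quorum-guard) filtered out this way, the remaining per-pod rates should reflect actual pod traffic rather than whole-node traffic.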