Bug 1854435 - Traffic spikes every 10 minutes to master API
Summary: Traffic spikes every 10 minutes to master API
Keywords:
Status: CLOSED DUPLICATE of bug 1854434
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Ben Bennett
QA Contact: zhaozhanqi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-07-07 13:40 UTC by Dirk Porter
Modified: 2020-11-05 00:04 UTC
CC List: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-07 14:38:53 UTC
Target Upstream Version:
Embargoed:



Description Dirk Porter 2020-07-07 13:40:59 UTC
Description of problem:
Customer is reporting that their F5 VIP receives ~50G of additional traffic during a spike that occurs every 10 minutes. Looking at the pcaps, all of the traffic is going to the API.
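
A rough way to confirm the 10-minute periodicity from the pcaps is to bucket the bytes headed for the VIP into fixed intervals. A minimal sketch; capture.pcap and 10.0.0.10:443 below are placeholders, not values from this case:

  # Sum captured traffic toward the API VIP in 60-second buckets
  tshark -r capture.pcap -q -z io,stat,60,"ip.dst==10.0.0.10 && tcp.port==443"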

Looking at the number of nodes in the cluster, there appear to be over 800.
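
For the record, the node count can be confirmed with something like the following (assuming cluster-admin access on the 3.11 cluster):

  # Count the nodes registered with the master API
  oc get nodes --no-headers | wc -l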

An early observation was the SDN output in the sosreport. However, the SDN pods looked fine when I examined them:

"root      16910  0.0  0.0  13272  3044 ?        Ss   Jun17   1:45 /bin/bash -c #!/bin/bash set -euo pipefail  # cleanup old log files rm -f /var/log/openvswitch-old/ovsdb-server.log /var/log/openvswitch-old/ovs-vswitchd.log  mkdir -p /var/log/openvswitch  # if another process is listening on the cni-server socket, wait until it exits trap 'kill $(jobs -p); exit 0' TERM retries=0 while true; do   if 
 /usr/share/openvswitch/scripts/ovs-ctl status &>/dev/null; then     echo "warning: Another process is currently managing OVS, waiting 15s ..." 2>&1     sleep 15 & wait     (( retries += 1 ))   else     break   fi   if [[ "${retries}" -gt 40 ]]; then     echo "error: Another process is currently managing OVS, exiting" 2>&1     exit 1   fi done  # launch OVS function quit {     /usr/share/openvswitch/scripts/
 ovs-ctl stop     exit 0 } trap quit SIGTERM /usr/share/openvswitch/scripts/ovs-ctl start --no-ovs-vswitchd --system-id=random  # Restrict the number of pthreads ovs-vswitchd creates to reduce the # amount of RSS it uses on hosts with many cores # https://bugzilla.redhat.com/show_bug.cgi?id=1571379 # https://bugzilla.redhat.com/show_bug.cgi?id=1572797 if [[ `nproc` -gt 12 ]]; then     ovs-vsctl --no-wait 
 set Open_vSwitch . other_config:n-revalidator-threads=4     ovs-vsctl --no-wait set Open_vSwitch . other_config:n-handler-threads=10 fi /usr/share/openvswitch/scripts/ovs-ctl start --no-ovsdb-server --system-id=random  tail --follow=name /var/log/openvswitch/ovs-vswitchd.log /var/log/openvswitch/ovsdb-server.log & sleep 20 while true; do   if ! /usr/share/openvswitch/scripts/ovs-ctl status &>/dev/null; 
then     echo "OVS seems to have crashed, exiting"     quit   fi   sleep 15 done"
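
The SDN pod health can be double-checked at the pod level along these lines (assuming the 3.11 default openshift-sdn namespace):

  # List the SDN/OVS pods and check they are Running without restarts
  oc -n openshift-sdn get pods -o wide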

I have been trying to figure out what could be causing a spike every 10 minutes within a cluster, and cannot think of a process that reports on a 10-minute interval and would have this sort of impact.

Looking for assistance on what to check next. I suspect this may be a consequence of the number of nodes. What could be sending this traffic to the API?
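
One avenue that might narrow it down (a sketch, not a confirmed diagnosis): the master API exposes Prometheus metrics that break requests down per client, so whichever component spikes should stand out. In the Kubernetes version underlying 3.11 the counter is apiserver_request_count; adjust the name if it differs:

  # Sum API requests per client user agent and show the heaviest callers
  oc get --raw /metrics \
    | grep '^apiserver_request_count' \
    | sed 's/.*client="\([^"]*\)".*} /\1\t/' \
    | awk -F'\t' '{sum[$1]+=$2} END {for (c in sum) print sum[c], c}' \
    | sort -rn | head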

Version-Release number of selected component (if applicable):
3.11

How reproducible:
I have not found anything in my investigation that would cause or reproduce this.

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Stephen Cuppett 2020-07-07 14:38:53 UTC

*** This bug has been marked as a duplicate of bug 1854434 ***

