Bug 1959352 - [scale] failed to get pod annotation: timed out waiting for annotations
Summary: [scale] failed to get pod annotation: timed out waiting for annotations
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.7
Hardware: All
OS: All
high
medium
Target Milestone: ---
Target Release: 4.10.0
Assignee: Tim Rozet
QA Contact: Mike Fiedler
URL:
Whiteboard: perfscale-ovn
Duplicates: 1999704 2005985 2007009
Depends On:
Blocks: 1997072 2005985 2076201
 
Reported: 2021-05-11 10:55 UTC by Joe Talerico
Modified: 2023-09-18 00:26 UTC
CC List: 26 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1997072 2034645 2076201
Environment:
Last Closed: 2022-03-10 16:03:38 UTC
Target Upstream Version:
Embargoed:
trozet: needinfo-




Links
Github eBay libovsdb pull 42 (2021-09-10 16:32:53 UTC)
Github openshift ovn-kubernetes pull 667 (2021-08-18 20:24:25 UTC)
Github openshift ovn-kubernetes pull 750 - Bug 1959352: phase 2 scale improvements (open; 2021-09-15 20:41:14 UTC)
Red Hat Product Errata RHSA-2022:0056 (2022-03-10 16:04:05 UTC)

Internal Links: 2052415

Description Joe Talerico 2021-05-11 10:55:30 UTC
Description of problem:
Running with 300 nodes, during the cluster-density test with 2k iterations we noticed some pods struggling to get annotated (see the sketch of the CNI-side annotation wait after the reproduction steps below):

 Warning  FailedCreatePodSandBox  36m  kubelet  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-density-3-1-build_cluster-density-608dafbd-77a3-4122-a724-1ce9a5401499-971_4e1f7733-050c-4709-acb1-7a48cf0b5e41_0(9fa18910b97eaf6ded8028cfd629829fa829f2f710441a79a5db319cd528faa3): [cluster-density-608dafbd-77a3-4122-a724-1ce9a5401499-971/cluster-density-3-1-build:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[cluster-density-608dafbd-77a3-4122-a724-1ce9a5401499-971/cluster-density-3-1-build 9fa18910b97eaf6ded8028cfd629829fa829f2f710441a79a5db319cd528faa3] [cluster-density-608dafbd-77a3-4122-a724-1ce9a5401499-971/cluster-density-3-1-build 9fa18910b97eaf6ded8028cfd629829fa829f2f710441a79a5db319cd528faa3] failed to get pod annotation: timed out waiting for annotations
'
  Warning  FailedCreatePodSandBox  35m  kubelet  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-density-3-1-build_cluster-density-608dafbd-77a3-4122-a724-1ce9a5401499-971_4e1f7733-050c-4709-acb1-7a48cf0b5e41_0(22065b7097a84cdb83d7c854a94ff2ef45212a1ecf7f183ca59891cd067a7199): [cluster-density-608dafbd-77a3-4122-a724-1ce9a5401499-971/cluster-density-3-1-build:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[cluster-density-608dafbd-77a3-4122-a724-1ce9a5401499-971/cluster-density-3-1-build 22065b7097a84cdb83d7c854a94ff2ef45212a1ecf7f183ca59891cd067a7199] [cluster-density-608dafbd-77a3-4122-a724-1ce9a5401499-971/cluster-density-3-1-build 22065b7097a84cdb83d7c854a94ff2ef45212a1ecf7f183ca59891cd067a7199] failed to get pod annotation: timed out waiting for annotations
'
  Warning  FailedCreatePodSandBox  34m  kubelet  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-density-3-1-build_cluster-density-608dafbd-77a3-4122-a724-1ce9a5401499-971_4e1f7733-050c-4709-acb1-7a48cf0b5e41_0(5731561b9589a91808c71623ebc7c3776a3d808631a069d2e8bf39b50faf3c66): [cluster-density-608dafbd-77a3-4122-a724-1ce9a5401499-971/cluster-density-3-1-build:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[cluster-density-608dafbd-77a3-4122-a724-1ce9a5401499-971/cluster-density-3-1-build 5731561b9589a91808c71623ebc7c3776a3d808631a069d2e8bf39b50faf3c66] [cluster-density-608dafbd-77a3-4122-a724-1ce9a5401499-971/cluster-density-3-1-build 5731561b9589a91808c71623ebc7c3776a3d808631a069d2e8bf39b50faf3c66] failed to get pod annotation: timed out waiting for annotations
'


Version-Release number of selected component (if applicable):
4.8

How reproducible:
100%

Steps to Reproduce:
1. Deploy OCP 4.8 w/ OVNKubernetes
2. Scale to 300 workers
3. Run kube-burner with cluster-density set to 2k iterations
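
For context, the "timed out waiting for annotations" error above is raised on the CNI side when it gives up waiting for ovnkube-master to write the pod's network annotation. A minimal Go sketch of that wait loop follows; it is not the actual ovn-kubernetes code, and the annotation key, polling interval, and 30s timeout are assumptions for illustration only.

// Sketch of the CNI-side annotation wait; not the real ovn-kubernetes implementation.
package sketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// Assumed annotation key written by ovnkube-master when the logical port is ready.
const podNetworkAnnotation = "k8s.ovn.org/pod-networks"

// waitForPodAnnotation polls the API server until the pod carries the OVN network
// annotation, or the timeout expires (e.g. 30*time.Second), in which case it returns
// the same error text seen in the events above.
func waitForPodAnnotation(client kubernetes.Interface, namespace, name string, timeout time.Duration) (string, error) {
	var value string
	err := wait.PollImmediate(200*time.Millisecond, timeout, func() (bool, error) {
		pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		v, ok := pod.Annotations[podNetworkAnnotation]
		if !ok {
			return false, nil // ovnkube-master has not annotated the pod yet
		}
		value = v
		return true, nil
	})
	if err != nil {
		return "", fmt.Errorf("failed to get pod annotation: timed out waiting for annotations")
	}
	return value, nil
}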

Comment 1 Jose Castillo Lema 2021-07-23 13:12:53 UTC
We are getting the same timeouts in two different baremetal perf testing environments:

  1. 120 node, OCP 4.8.0-fc.9
  Especially with node-density-light and high pods-per-worker tests.
  From a 120-node, 250 pods/worker node-density-light test, taking a sample of 100 pods we see 202 events of this type:
  $ for i in {14000..14100}; do oc describe po -n node-density-f66eaffd-833b-4270-b7f4-9e5c6ffcf126 node-density-$i | grep "waiting for annotations"; done | wc -l
  202

  This is probably causing, or related to, the considerably larger pod creation latencies.

  Env:
   - OCP 4.8.0-fc.9
   - local gateway
   - ovn2.13-20.12.0-25.el8fdp.x86_64


  2. 10 node, OCP 4.7.20
  In this case we only see the timeout errors within an ICNI2 setup, in the served/app pods.
  For the 25 pods/worker test, we see 40 timeout events across all pods in the cluster.
  For the 50 pods/worker test, we observe 298 timeout events.
  For the 100 pods/worker, 1953 events.
  The same tests, without ICNI2 annotations, present 0 timeout events.
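
The per-pod sampling above can also be done cluster-wide. Below is a small client-go sketch (a hypothetical helper, not part of any existing test suite) that counts the annotation-timeout events across all namespaces; note that events expire after their retention period, so this only sees recent occurrences.

// Hypothetical helper: counts "timed out waiting for annotations" occurrences
// among FailedCreatePodSandBox events across all namespaces.
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// "" namespace lists events in all namespaces; filter on the sandbox-creation reason.
	events, err := client.CoreV1().Events("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "reason=FailedCreatePodSandBox",
	})
	if err != nil {
		panic(err)
	}

	count := 0
	for _, ev := range events.Items {
		if strings.Contains(ev.Message, "timed out waiting for annotations") {
			count++
		}
	}
	fmt.Printf("annotation timeout events: %d\n", count)
}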

Comment 2 Tim Rozet 2021-07-27 15:13:46 UTC
The root cause of this issue is a case where ovnkube-master cannot keep up with how many pods are getting created, and the ovnkube-node CNI is waiting on ovnkube-master to annotate the pods. We currently have 15 pod handlers in ovnkube-master. Doing some scale testing with some mocks I see:

100 ms delay, 5k pods:
  1 handler    - 504 s
  15 handlers  - 15 s
  150 handlers - 6 s

300 ms delay, 5k pods:
  15 handlers  - 117 s
  150 handlers - 16 s

the "delay" above is the assumed amount of time addLogicalPort takes. It's usually around 100ms on a non-loaded cluster, around 300ms on a heavily loaded cluster. We can see a big drop when creating 5k pods at once.

Running this with Joe on the scale lab node-density test we see no more of these errors, at the cost of memory increasing by about 130 MB on ovnkube-master. I can get this memory down somewhat by tweaking the handlers further. There was no significant increase in CPU usage over the run. Also, pod ready latency dropped from 30s down to 20s.
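
For reference, a minimal sketch of the kind of mock behind the numbers above (not the ovnkube-master code; the per-pod delay is a plain sleep standing in for addLogicalPort, so it will not reproduce the exact figures, and the 1-handler case takes roughly 500 s of wall time to run):

// Mock of N pod handlers draining a queue of 5k "pod add" events, each event
// costing a fixed simulated addLogicalPort delay. Prints total wall time per
// handler count to show how throughput scales with concurrency.
package main

import (
	"fmt"
	"sync"
	"time"
)

func runMock(handlers, pods int, addLogicalPortDelay time.Duration) time.Duration {
	work := make(chan int, pods)
	for i := 0; i < pods; i++ {
		work <- i
	}
	close(work)

	start := time.Now()
	var wg sync.WaitGroup
	for i := 0; i < handlers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range work {
				time.Sleep(addLogicalPortDelay) // stand-in for addLogicalPort
			}
		}()
	}
	wg.Wait()
	return time.Since(start)
}

func main() {
	for _, handlers := range []int{1, 15, 150} {
		fmt.Printf("%d handlers: %v\n", handlers, runMock(handlers, 5000, 100*time.Millisecond))
	}
}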

Comment 4 Kedar Kulkarni 2021-08-24 14:58:59 UTC
Hi, 

Since @jtaleric and @trozet confirmed that the error can still be seen in some of their testing, I am moving this back to Assigned until some additional fixes are in place.

Thanks,
KK.

Comment 6 Tim Rozet 2021-09-10 16:32:06 UTC
To fully fix this issue, where ovnkube -> nbdb is a bottleneck, we are going to need a series of further fixes. The problem is twofold:
1. ovnkube-master is too slow, so it takes too long to annotate pods during large-scale creation at 20 qps
2. nbdb is going to 100% CPU, causing ovnkube-master to slow down

I'll add the PRs to the external trackers on this bug as I push them, and then we will need to get them all downstream in a PR to fix this issue.

Comment 7 Tim Rozet 2021-09-15 18:31:52 UTC
Further investigation has shown that with optimizations to ovnkube-master and nbdb, we can get to around 12k pods at a creation rate of 20/sec before NBDB starts backing up and becoming the bottleneck. This is roughly halfway through a 120-node node-density-light test. Further optimizations to ovsdb-server for NBDB would be required in order to make further progress in this scenario. We have a bunch of key fixes related to this BZ that will help perf/scale. I think we should focus on getting those in with this BZ and resolving it. Then we can open another bug for the poor performance of ovsdb-server.

Will post the final PR to resolve this bz soon.

Comment 8 Tim Rozet 2021-09-29 15:30:23 UTC
*** Bug 2007009 has been marked as a duplicate of this bug. ***

Comment 9 Tim Rozet 2021-09-29 15:31:58 UTC
*** Bug 2005985 has been marked as a duplicate of this bug. ***

Comment 10 Tim Rozet 2021-09-29 15:33:58 UTC
*** Bug 1999704 has been marked as a duplicate of this bug. ***

Comment 14 Mike Fiedler 2021-10-04 20:22:35 UTC
Verified on 4.10.0-0.nightly-2021-10-02-095441 via multiple runs of the PerfScale cluster-density and node-density tests at 120-node scale. No FailedCreatePodSandBox events with pod annotation timeouts as the reason; there were some with OVS port binding timeouts.

Comment 41 errata-xmlrpc 2022-03-10 16:03:38 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0056

Comment 50 Red Hat Bugzilla 2023-09-18 00:26:35 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

