Bug 1986970 - Node outages can lead to (legitimate) mass restarts of VMs which can block our controller
Summary: Node outages can lead to (legitimate) mass restarts of VMs which can block our controller
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Virtualization
Version: 2.6.7
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.10.0
Assignee: Roman Mohr
QA Contact: Denys Shchedrivyi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-07-28 15:47 UTC by Roman Mohr
Modified: 2022-03-16 15:53 UTC

Fixed In Version: virt-operator-container-v4.10.0-142 hco-bundle-registry-container-v4.10.0-479
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-03-16 15:51:21 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHSA-2022:0947 (last updated 2022-03-16 15:53:09 UTC)

Description Roman Mohr 2021-07-28 15:47:14 UTC
Description of problem:

Right now KubeVirt uses very low, non-configurable QPS values for its Kubernetes client rate limiters. Any operation that leads to mass restarts of VMs can push the CNV controllers into these QPS limits, which leads to launcher pod timeouts while the pod is waiting for KVM to be started. As a consequence no VMs can be started anymore, since each timeout triggers a VMI recreation, which in turn puts more pressure on the rate limiter, and so on.
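
For context, a minimal Go sketch (not KubeVirt code) of the client-go token-bucket rate limiter that sits behind rest.Config.QPS/Burst. The 5 QPS / burst 10 values are client-go's long-standing defaults when nothing is configured; the point is only to show how quickly a controller doing mass pod creations ends up blocked on the limiter.

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

// Minimal sketch of the token-bucket rate limiter client-go builds from
// rest.Config.QPS/Burst. With the defaults (5 QPS, burst 10), a controller
// that has to issue hundreds of pod creations during a mass VM restart
// exhausts the bucket quickly and every further request blocks in Accept(),
// which is the starvation described above.
func main() {
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10) // qps=5, burst=10

	start := time.Now()
	for i := 0; i < 50; i++ {
		limiter.Accept() // blocks once the burst tokens are spent
	}
	// 50 requests at 5 QPS: roughly 8 seconds spent only waiting on the limiter.
	fmt.Printf("50 throttled requests took %v\n", time.Since(start))
}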

Version-Release number of selected component (if applicable):


How reproducible:


Force-delete a large number of VMIs and wait for the VM controllers to recover. It takes a very long time until even part of the VMs are running again.
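
Illustrative only (not part of the original report): one way to script the force-delete step with client-go's dynamic client. The "default" namespace and the kubevirt.io/v1 API version are assumptions; older releases expose kubevirt.io/v1alpha3.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

// Force-delete every VMI in a namespace with a zero grace period so the VM
// controllers have to recreate all of them at once.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	vmiGVR := schema.GroupVersionResource{Group: "kubevirt.io", Version: "v1", Resource: "virtualmachineinstances"}
	zero := int64(0)

	vmis, err := dyn.Resource(vmiGVR).Namespace("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, vmi := range vmis.Items {
		// GracePeriodSeconds=0 is the programmatic equivalent of
		// `kubectl delete vmi <name> --force --grace-period=0`.
		if err := dyn.Resource(vmiGVR).Namespace("default").Delete(
			context.TODO(), vmi.GetName(), metav1.DeleteOptions{GracePeriodSeconds: &zero}); err != nil {
			fmt.Println("delete failed:", err)
		}
	}
}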


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


We should introduce higher QPS limits, additionally make them configurable, and finally expose a rate-limiter Prometheus metric so that rate-limit hits can be monitored.
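
A hedged sketch of what the metrics part could look like: wrap the client-go rate limiter and count how often callers had to wait on an empty bucket. The metric and type names (throttledRequests, instrumentedLimiter) are illustrative, not necessarily what KubeVirt ships.

package main

import (
	"github.com/prometheus/client_golang/prometheus"
	"k8s.io/client-go/util/flowcontrol"
)

// Illustrative counter for requests delayed by the client-side rate limiter.
var throttledRequests = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "rest_client_rate_limiter_throttled_requests_total",
	Help: "Number of REST requests that were delayed by the client-side rate limiter.",
})

// instrumentedLimiter wraps a client-go rate limiter and records every
// request that could not be admitted immediately.
type instrumentedLimiter struct {
	flowcontrol.RateLimiter
}

func (l *instrumentedLimiter) Accept() {
	if !l.TryAccept() { // no token available right now: record the hit, then wait
		throttledRequests.Inc()
		l.RateLimiter.Accept()
	}
}

func main() {
	prometheus.MustRegister(throttledRequests)
	_ = &instrumentedLimiter{RateLimiter: flowcontrol.NewTokenBucketRateLimiter(200, 400)}
}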


Additional info:


Applicable to all CNV versions

Comment 2 sgott 2021-12-15 13:26:41 UTC
To verify, repeat the scenario from the BZ description.

Comment 3 Denys Shchedrivyi 2022-01-31 17:29:37 UTC
Verified by comparing the time needed to create 100 VMI replicas in CNV 4.8.0 vs. v4.10.0-636.

In 4.8, on my environment, it took around *40* seconds after updating the "replicas" value for the first pods to appear on the cluster.
In 4.10 with the default QPS values it takes around *10* seconds. Decreasing the QPS values in the config expectedly increases the processing time.
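
For reference, the QPS/burst knobs being tuned here live under spec.configuration of the KubeVirt CR. A hedged Go sketch using the upstream kubevirt.io/api types: the field names (controllerConfiguration, restClient, rateLimiter, tokenBucketRateLimiter) are recalled from the upstream API and should be checked against the CRD of the installed CNV version, and 200/400 are example numbers rather than product defaults.

package main

import (
	"fmt"

	v1 "kubevirt.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Example rate-limiter settings for virt-controller; field names assumed
	// from the upstream KubeVirt API, verify against your installed CRD.
	cfg := v1.KubeVirtConfiguration{
		ControllerConfiguration: &v1.ReloadableComponentConfiguration{
			RestClient: &v1.RESTClientConfiguration{
				RateLimiter: &v1.RateLimiter{
					TokenBucketRateLimiter: &v1.TokenBucketRateLimiter{
						QPS:   200, // lower this to make throttling visible again
						Burst: 400,
					},
				},
			},
		},
	}

	// Printing the YAML shows the snippet that would go under
	// spec.configuration of the KubeVirt custom resource.
	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}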

In my opinion, we can consider this bug fixed.

Comment 8 errata-xmlrpc 2022-03-16 15:51:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Virtualization 4.10.0 Images security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0947

