Bug 2036027 - CNV 4.9.1 | VM deployments are failing due to webhook context deadline timeout
Summary: CNV 4.9.1 | VM deployments are failing due to webhook context deadline timeout
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Virtualization
Version: 4.9.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.14.0
Assignee: Igor Bezukh
QA Contact: guy chen
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-12-29 11:10 UTC by Boaz
Modified: 2023-11-08 14:05 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-11-08 14:05:03 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Issue Tracker CNV-15550 (last updated 2022-11-03 06:18:03 UTC)
Red Hat Product Errata RHSA-2023:6817 (last updated 2023-11-08 14:05:28 UTC)

Comment 6 Ying Cui 2022-11-03 06:16:40 UTC
https://github.com/kubevirt/kubevirt/issues/7101#issuecomment-1160246649

The bug is not fixed yet, so it will not be tested; moving it back to POST.

Comment 7 Antonio Cardace 2022-11-18 10:08:32 UTC
Deferring to 4.13 due to capacity.

Comment 9 Kedar Bidarkar 2023-03-01 13:51:43 UTC
Moving this to CNV 4.14 due to capacity.

Comment 10 Boaz 2023-08-23 06:21:14 UTC
Revisiting this issue: although the idea behind https://github.com/kubevirt/kubevirt/issues/7101 - caching cluster/VM states - still stands, this BZ itself is no longer an issue in 4.13 thanks to the new rate-limiting functionality:

To accommodate the potential surge in on-the-fly requests that a large-scale setup may generate during significant actions, I enhanced the rate-limiting configuration. Specifically, I increased the queries per second (QPS) from 5 to 100, allowing the system to handle a higher request rate, and raised the burst limit from 10 to 200, enabling the system to momentarily absorb a spike of requests beyond the QPS threshold (for example, with qps=100 and burst=200, an idle client can fire up to 200 requests at once and then sustain 100 per second). For more information, please refer to the KCS tuning guide.
Also, the new default is going to be set at 50 QPS / 100 burst.

The updated configuration was applied using the following command, ensuring that the system is better equipped to manage and respond to a greater volume of requests during critical operations:
kubectl annotate hco kubevirt-hyperconverged -n openshift-cnv hco.kubevirt.io/tuningPolicy='{"qps":100,"burst":200}'
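
As a quick sanity check, the applied policy can be read back from the annotation with a jsonpath query (the dots in the annotation key need to be escaped):
kubectl get hco kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.metadata.annotations.hco\.kubevirt\.io/tuningPolicy}'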


Now we just need to enable the new values by switching the tuning policy to annotation mode:
kubectl patch hco kubevirt-hyperconverged -n openshift-cnv --type=json -p='[{"op": "add", "path": "/spec/tuningPolicy", "value": "annotation"}]'
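
To confirm the values actually landed, one could inspect the rate-limiter section of the KubeVirt CR. This is a sketch under two assumptions on my side: that HCO propagates the tuning policy into the KubeVirt CR's tokenBucket rate-limiter fields, and that the CR keeps the default name HCO gives it (kubevirt-kubevirt-hyperconverged):
kubectl get kubevirt kubevirt-kubevirt-hyperconverged -n openshift-cnv -o yaml | grep -A 2 tokenBucket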


I think this issue can be resolved on account of the info above.

Comment 11 Igor Bezukh 2023-08-23 11:18:40 UTC
Here is a link to customer KB for tuning: https://access.redhat.com/articles/6994974

Comment 14 errata-xmlrpc 2023-11-08 14:05:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: OpenShift Virtualization 4.14.0 Images security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6817

