Bug 2036027
Summary: | CNV 4.9.1 \| VM deployments are failing due to webhook context deadline timeout | ||
---|---|---|---|
Product: | Container Native Virtualization (CNV) | Reporter: | Boaz <bbenshab> |
Component: | Virtualization | Assignee: | Igor Bezukh <ibezukh> |
Status: | CLOSED ERRATA | QA Contact: | guy chen <guchen> |
Severity: | high | Docs Contact: | |
Priority: | high | ||
Version: | 4.9.1 | CC: | acardace, kbidarka, sradco, ycui |
Target Milestone: | --- | Keywords: | Scale |
Target Release: | 4.14.0 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2023-11-08 14:05:03 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | |
Comment 6
Ying Cui
2022-11-03 06:16:40 UTC
Deferring to 4.13 due to capacity. Moving this to CNV 4.14 due to capacity.

Revisiting this issue: although my idea behind https://github.com/kubevirt/kubevirt/issues/7101 was caching cluster/VM states, this BZ itself is no longer an issue as of 4.13 thanks to the new rate-limiting functionality.

To accommodate the surge of on-the-fly requests that a large-scale setup may generate during significant actions, I enhanced the rate-limiting configuration. Specifically, I increased the number of queries per second (QPS) from 5 to 100, allowing the system to handle a higher request rate, and raised the burst limit from 10 to 200, enabling the system to momentarily absorb bursts of requests beyond the QPS threshold. For more information, please refer to the KCS tuning guide; note that the new default is going to be set at 50 QPS with a burst of 100.

The updated configuration was implemented using the following command, ensuring that the system is better equipped to manage and respond to a greater volume of requests during critical operations:

    kubectl annotate hco kubevirt-hyperconverged -n openshift-cnv hco.kubevirt.io/tuningPolicy='{"qps":100,"burst":200}'

Now we just need to enable the new values through the annotation by setting the tuningPolicy field:

    kubectl patch hco kubevirt-hyperconverged -n openshift-cnv --type=json -p='[{"op": "add", "path": "/spec/tuningPolicy", "value": "annotation"}]'

I think this issue can be resolved on account of the info above. Here is a link to the customer KB article for tuning: https://access.redhat.com/articles/6994974

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Virtualization 4.14.0 Images security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6817
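For reference, a minimal sketch of how the applied tuning can be checked after running the two commands above (this assumes the downstream openshift-cnv namespace and the default HyperConverged resource name kubevirt-hyperconverged; oc can be substituted for kubectl):

    # Read back the annotation carrying the requested limits;
    # this should print {"qps":100,"burst":200}:
    kubectl get hco kubevirt-hyperconverged -n openshift-cnv \
      -o jsonpath='{.metadata.annotations.hco\.kubevirt\.io/tuningPolicy}'

    # Confirm the tuningPolicy mode is set to "annotation", so the
    # annotated values are actually consumed:
    kubectl get hco kubevirt-hyperconverged -n openshift-cnv \
      -o jsonpath='{.spec.tuningPolicy}'

If both commands return the expected values, the elevated QPS/burst settings should be propagated by HCO to the KubeVirt components on reconciliation.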