Bug 2008712

Summary: VPA webhook timeout prevents all pods from starting
Product: OpenShift Container Platform
Reporter: Joel Smith <joelsmith>
Component: Node
Assignee: Joel Smith <joelsmith>
Node sub component: Autoscaler (HPA, VPA)
QA Contact: Weinan Liu <weinliu>
Status: CLOSED ERRATA Docs Contact:
Severity: medium    
Priority: medium CC: aos-bugs
Version: 4.7   
Target Milestone: ---   
Target Release: 4.10.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 2008713 (view as bug list)
Environment:
Last Closed: 2022-03-10 16:13:56 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 2008713    

Description Joel Smith 2021-09-29 00:10:41 UTC
Description of problem:
When the VPA is installed and its admission webhook service becomes slow or unreachable, the API server should ignore the failure and continue with pod creation.

However, the VPA webhook's timeout is so long that while the webhook service is in a bad state, every pod creation request stalls for the full VPA webhook timeout and fails, so no pods can be started.
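For context, the relevant knobs live on the MutatingWebhookConfiguration object. The fragment below is an illustrative sketch only, not the exact manifest shipped by the operator; the webhook and service names are assumptions based on the upstream VPA admission controller:

```yaml
# Illustrative fragment -- field names are from the Kubernetes
# admissionregistration.k8s.io/v1 API; names/values are assumed, not verbatim.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: vpa-webhook-config
webhooks:
  - name: vpa.k8s.io
    # Ignore means the API server proceeds with pod creation when the webhook
    # call fails -- but only after waiting out timeoutSeconds.
    failurePolicy: Ignore
    # Lowering the timeout (30 -> 10 here) bounds how long a slow or
    # unreachable webhook can delay every pod creation request.
    timeoutSeconds: 10
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: vpa-webhook            # assumed service name
        namespace: openshift-vertical-pod-autoscaler
        port: 443
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
```

With `failurePolicy: Ignore`, the fix is to cap `timeoutSeconds` (driven by the admission plugin's `--webhook-timeout-seconds` argument) so that a broken webhook delays, rather than blocks, admission.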

How reproducible:
100%

Steps to Reproduce:
1. Install VPA
2. oc get deployment -n openshift-vertical-pod-autoscaler vpa-admission-plugin-default -o jsonpath='{.spec.template.spec.containers[0].args}' | jq
3. oc get mutatingwebhookconfiguration vpa-webhook-config -o jsonpath='{.webhooks[0].timeoutSeconds}{"\n"}'

Actual results:
[
  "--logtostderr",
  "--v=1",
  "--tls-cert-file=/data/tls-certs/tls.crt",
  "--tls-private-key=/data/tls-certs/tls.key",
  "--client-ca-file=/data/tls-ca-certs/service-ca.crt"
]
30

Expected results:
[
  "--logtostderr",
  "--v=1",
  "--tls-cert-file=/data/tls-certs/tls.crt",
  "--tls-private-key=/data/tls-certs/tls.key",
  "--client-ca-file=/data/tls-ca-certs/service-ca.crt",
  "--webhook-timeout-seconds=10"
]
10

Comment 2 Weinan Liu 2021-11-03 10:25:29 UTC
ose-vertical-pod-autoscaler-operator-metadata-container-v4.10.0.202110121937.p0.git.f2728f4.assembly.stream-1/
ose-vertical-pod-autoscaler-operator-metadata-container-v4.10.0.202110262251.p0.git.f2728f4.assembly.stream-1/
ose-vertical-pod-autoscaler-operator-metadata-container-v4.10.0.202111020853.p0.git.f2728f4.assembly.stream-1/

None of the builds above had passed the CVP test as of Nov. 3 (CST).
Waiting for new builds to check.

Comment 4 Weinan Liu 2021-11-08 05:42:50 UTC
@

Comment 5 Weinan Liu 2021-11-11 07:19:34 UTC
We kept getting CVP Test Result Status: UNSTABLE.
Last two failed builds:
ose-vertical-pod-autoscaler-operator-metadata-container-v4.10.0.202111091430.p0.git.f2728f4.assembly.stream-1/
ose-vertical-pod-autoscaler-operator-metadata-container-v4.10.0.202111092129.p0.git.f2728f4.assembly.stream-1/

Comment 11 errata-xmlrpc 2022-03-10 16:13:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0056