Description of problem:

SSP has issues when the minimum TLS version is set to 1.3. Keep HCO on its defaults and configure SSP to work with TLS 1.3 only:

$ oc get ssp ssp-kubevirt-hyperconverged -ojsonpath={.spec.tlsSecurityProfile}
{"custom":{"ciphers":["TLS_AES_128_GCM_SHA256","TLS_CHACHA20_POLY1305_SHA256"],"minTLSVersion":"VersionTLS13"},"type":"Custom"}

$ oc get services | grep ssp
ssp-operator-metrics   ClusterIP   172.30.72.113   <none>   443/TCP    12h
ssp-operator-service   ClusterIP   172.30.236.4    <none>   9443/TCP   3m40s

Now try to connect with different TLS versions.

TLS 1.1:
sh-4.4# openssl s_client -connect 172.30.236.4:9443 --tls1_1
CONNECTED(00000003)
140273104361280:error:1409442E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol version:ssl/record/rec_layer_s3.c:1544:SSL alert number 70

TLS 1.2:
sh-4.4# openssl s_client -connect 172.30.236.4:9443 --tls1_2
CONNECTED(00000003)
139768527726400:error:1409442E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol version:ssl/record/rec_layer_s3.c:1544:SSL alert number 70

TLS 1.3:
sh-4.4# openssl s_client -connect 172.30.236.4:9443 --tls1_3
CONNECTED(00000003)
140334848022336:error:1409442E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol version:ssl/record/rec_layer_s3.c:1544:SSL alert number 70
---
---
no peer certificate available

So we cannot connect to the SSP service at all. When TLS 1.3 is selected, a client offering a matching protocol version should be able to complete the handshake.
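All three handshakes above fail with "SSL alert number 70". A small hypothetical helper (not part of SSP or OpenSSL) decodes the common TLS alert codes, per the RFC 8446 alert registry:

```python
# Hypothetical helper: map the "SSL alert number" printed by openssl s_client
# to the registered TLS alert name (subset of the RFC 8446 alert registry).
TLS_ALERTS = {
    40: "handshake_failure",
    42: "bad_certificate",
    70: "protocol_version",
    80: "internal_error",
}

def describe_alert(code: int) -> str:
    """Return the registered alert name, or a placeholder for unknown codes."""
    return TLS_ALERTS.get(code, f"unknown_alert_{code}")

# Alert 70 is what all three handshakes above received:
print(describe_alert(70))  # protocol_version
```

Alert 70 ("protocol_version") means the server rejected every protocol version the client offered, for 1.1, 1.2, and 1.3 alike.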
After some time:

[cnv-qe-jenkins@c01-gk412sep20-jpkkx-executor ~]$ oc get hco kubevirt-hyperconverged -o yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  creationTimestamp: "2022-09-21T23:31:36Z"
  finalizers:
  - kubevirt.io/hyperconverged
  generation: 2
  labels:
    app: kubevirt-hyperconverged
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
  resourceVersion: "8155626"
  uid: 52a0211b-db23-42a0-8521-c627fd3bbb09
spec:
  certConfig:
    ca:
      duration: 48h0m0s
      renewBefore: 24h0m0s
    server:
      duration: 24h0m0s
      renewBefore: 12h0m0s
  featureGates:
    deployTektonTaskResources: false
    enableCommonBootImageImport: true
    nonRoot: true
    withHostPassthroughCPU: false
  infra: {}
  liveMigrationConfig:
    completionTimeoutPerGiB: 800
    parallelMigrationsPerCluster: 5
    parallelOutboundMigrationsPerNode: 2
    progressTimeout: 150
  uninstallStrategy: BlockUninstallIfWorkloadsExist
  workloadUpdateStrategy:
    batchEvictionInterval: 1m0s
    batchEvictionSize: 10
    workloadUpdateMethods:
    - LiveMigrate
  workloads: {}
status:
  conditions:
  - lastTransitionTime: "2022-09-22T11:36:12Z"
    message: 'Error while reconciling: Internal error occurred: failed calling webhook
      "validation.ssp.kubevirt.io": failed to call webhook: Post "https://ssp-operator-service.openshift-cnv.svc:9443/validate-ssp-kubevirt-io-v1beta1-ssp?timeout=10s":
      remote error: tls: protocol version not supported'
    observedGeneration: 2
    reason: ReconcileFailed
    status: "False"
    type: ReconcileComplete
  - lastTransitionTime: "2022-09-24T22:06:44Z"
    message: |-
      KubeVirt is not available: An error occurred during deployment: unable to patch route
      &Route{ObjectMeta:{virt-exportproxy openshift-cnv 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil>
      map[app.kubernetes.io/component:compute app.kubernetes.io/managed-by:virt-operator app.kubernetes.io/part-of:hyperconverged-cluster app.kubernetes.io/version:4.12.0]
      map[kubevirt.io/generation:1 kubevirt.io/install-strategy-identifier:1bb7763f6beeac35d51f400773963c91dbd41956 kubevirt.io/install-strategy-registry:registry.redhat.io/container-native-virtualization kubevirt.io/install-strategy-version:sha256:9d4aa17d99a51adacf7c66e3fff240df179055acba38d36462136efc1b2adfff]
      [] [] []},Spec:RouteSpec{Host:,Path:,To:RouteTargetReference{Kind:Service,Name:virt-exportproxy,Weight:nil,},AlternateBackends:[]RouteTargetReference{},Port:nil,TLS:&TLSConfig{Termination:reencrypt,Certificate:,Key:,CACertificate:,DestinationCACertificate:-----BEGIN CERTIFICATE-----
      MIIDBjCCAe6gAwIBAgIIEVP7H4Lo7e0wDQYJKoZIhvcNAQELBQAwITEfMB0GA1UE
      AwwWa3ViZXZpcnQuaW9AMTY2NDA2MjMyODAeFw0yMjA5MjQyMzMyMDhaFw0yMjA5
      MjYyMzMyMDhaMCExHzAdBgNVBAMMFmt1YmV2aXJ0LmlvQDE2NjQwNjIzMjgwggEi
      MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCuEWpyW3jKALfIm+tFgWdxwP6k
      M8vIMgtSucQebXajkdtiL+WWpMPMbeoaoKTdOHr6djw5BuZzu8Ps11J5xdzdSSzA
      FHHJSDtBVrTg7ZSKO6GXAAXpGVTYyrpfpLO73gJ2nfgwaTTku0SKK224jw2EKJ6X
      bIHCIFP7GlU/QdqnAGn5MQdXLGM0V9GIiaKjPAJWRLvRYNbERllI8E4KcXNyR5sB
      0+poj7vmqv7qjZmPzGenYvkaDvYeSaWs67A4f+u1/omPfq3gGBu/rMRUTOkwl/pl
      X06J1AJabaxticO7q5BtI+YrpfuUx4MpxarNpGKfTFpdw4fGk49Z/LBiNBz7AgMB
      AAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW
      BBTsgUIDr8trNKhQMclfnuqsPtkA8TANBgkqhkiG9w0BAQsFAAOCAQEABF5JNRgc
      Nb6ALMmc3oR7FOADevG/TPg/cJ2VnsvTxAMEg5RTqXfVAafyBZ0JO+axdi2NTjdL
      YRSU0aWJ67eU5YMCAcRZsGwIO39vkEiG4DACnXSk590LBREfvXcLFEgUZ3GDHIBk
      9qS1b3AiG6LMRpIfZkpL7iQvDBcsCTrAtjLnugE2dfJG6KkzThoWvNk98h5IcSkC
      25GUW7PoINWwQRUL7OWpyenh0nLIt96upLDuMOoI6K9vmWlRnUm4GczpZnSFtJqZ
      hCTJRHJzff8LKtjzbneiWGdNEoPCtkvr27ff7mOsfhdSQgQeSZD0Y16asID08vy6
      zZib0QAhZmKQQA==
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      MIIDBjCCAe6gAwIBAgIICUkkPD2QvGAwDQYJKoZIhvcNAQELBQAwITEfMB0GA1UE
      AwwWa3ViZXZpcnQuaW9AMTY2Mzk3NTkyODAeFw0yMjA5MjMyMzMyMDhaFw0yMjA5
      MjUyMzMyMDhaMCExHzAdBgNVBAMMFmt1YmV2aXJ0LmlvQDE2NjM5NzU5MjgwggEi
      MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDdAO7xacotEvBfnG3hgcweQE0J
      bV76WdC+a7mCDL43iul3hXQZdWDELJJYD/tkDuD51K1cuip9XXzC57VM0YJpNk4W
      nNZbWro8yPqcqKYAuHr2si0ucjy2nwHpa6KYILna8hrE7wIaaqyqH7J5/52qeMAo
      mJZEdaEYhvqbKw9Z/7oRGfnLLXx6Mtss6OUmi+RwlUA1nnk/qr0hGceE2M+4Fdqr
      CtszH/fvQjqYgOe/NQTz24YWY662Y+3U5boPTO2USp6c3zTEfEFvf18bmVeip8ml
      T5L0ua60jNe6EBUkzSQKecTg6AIil5HlD4VKfKUaiELfBfoexrKL4t8AH9gHAgMB
      AAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW
      BBRnW5hmygg8LRSfmAWq7j4rgyfJjTANBgkqhkiG9w0BAQsFAAOCAQEAD3AViU8t
      sXlSO241OrGc3eMZbRQ0gOrEx7d/utU4McbC3Hgv3HaiJ+qlq8yUeNthhWE/h+KV
      QuXmU62XX9ZDg5kWCdvH1OwpvgyV3oqXr5x2Pl/paSWluA/SYB+u+TzFPcKraCcq
      ZEA1w4Mya6qz5hxX6AL3bg9GFW5itOZKPVWWNxwawdja3DCzUJ5kSoydYDB2EUp3
      RrHMWY4gyfAWp5PTInRWYBPa72fTmQwzOIl/73ZJRZ3li/gVXM618kP4g4ML2oLj
      NFktIdcpK9CNzfpKpQP90Y1TOMjWt/pxYTmHEkBjF1RMqKEJzEf0BWTLEv/sJv3G
      biLqrdkiMp/tHQ==
      -----END CERTIFICATE-----
      ,InsecureEdgeTerminationPolicy:,},WildcardPolicy:,Subdomain:,},Status:RouteStatus{Ingress:[]RouteIngress{},},}:
      the server is currently unable to handle the request (patch routes.route.openshift.io virt-exportproxy)
    observedGeneration: 2
    reason: KubeVirtNotAvailable
    status: "False"
    type: Available
  - lastTransitionTime: "2022-09-24T23:32:08Z"
    message: Unknown Status
    observedGeneration: 2
    reason: StatusUnknown
    status: Unknown
    type: Progressing
  - lastTransitionTime: "2022-09-24T23:32:08Z"
    message: |-
      KubeVirt is degraded: An error occurred during deployment: unable to patch route
      [identical Route object and certificate dump as in the Available condition above]:
      the server is currently unable to handle the request (patch routes.route.openshift.io virt-exportproxy)
    observedGeneration: 2

SSP Operator logs:

{"level":"info","ts":1664198479.9558008,"logger":"setup","msg":"Got Ciphers and tlsProfile:","ciphers: ":["TLS_AES_128_GCM_SHA256","TLS_CHACHA20_POLY1305_SHA256"],"tlsProfile: ":"VersionTLS13"}
{"level":"info","ts":1664198479.955843,"logger":"setup","msg":"Starting Prometheus metrics endpoint server with TLS"}
I0926 13:21:21.006907 1 request.go:601] Waited for 1.044049149s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/performance.openshift.io/v1alpha1?timeout=32s
2022/09/26 13:21:30 http: TLS handshake error from 10.128.2.26:35328: tls: client offered only unsupported versions: [304 303]

This puts us in a deadlock: any change to the SSP configuration must be accepted by the SSP validating webhook, but the webhook cannot be reached because of this TLS issue, so the configuration cannot even be reverted to the default.

Version-Release number of selected component (if applicable):
4.12

How reproducible:
always

Steps to Reproduce:
1. Same as described in the description above.

Actual results:
SSP is not accepting TLS 1.3, and HCO cannot be brought back to a working state due to validation errors.
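The last SSP operator log line reports the client's offered protocol versions as raw hex codes ("[304 303]"). Decoding them with the TLS wire-format version values shows the client offered TLS 1.3 and TLS 1.2, both of which the server rejected; this suggests the server side never actually enabled TLS 1.3 despite the configured profile. A hypothetical decoder (illustrative only, not SSP code):

```python
# Hypothetical decoder: TLS wire-format version codes, as logged in hex
# by Go's crypto/tls ("client offered only unsupported versions: [304 303]").
TLS_VERSION_CODES = {
    0x0301: "TLS 1.0",
    0x0302: "TLS 1.1",
    0x0303: "TLS 1.2",
    0x0304: "TLS 1.3",
}

def decode_versions(codes):
    """Translate raw version codes to readable TLS version names."""
    return [TLS_VERSION_CODES.get(c, hex(c)) for c in codes]

print(decode_versions([0x304, 0x303]))  # ['TLS 1.3', 'TLS 1.2']
```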
Expected results:
HCO should be able to change the SSP configuration, and the change should be accepted by the SSP validating webhook.

Additional info:
Test Env:
Deployed: OCP-4.12.0-rc.2
Deployed: CNV-v4.12.0-745

Test steps:

Test Case 1:
1. Patch SSP with a custom TLS profile requiring VersionTLS13.
2. Make sure there are no errors reported.
3. Check the SSP resource and see that the custom setting is replaced with the HCO settings.

Test Case 2:
1. Set the HCO tlsSecurityProfile to Old:
$ oc get hco kubevirt-hyperconverged -n openshift-cnv -ojsonpath={.spec.tlsSecurityProfile}
{"old":{},"type":"Old"}
2. Set the SSP tlsSecurityProfile explicitly to Custom:
$ oc patch ssp -n openshift-cnv --type=json ssp-kubevirt-hyperconverged -p '[{"op": "replace", "path": "/spec/tlsSecurityProfile", "value": {"custom": {"minTLSVersion": "VersionTLS13", "ciphers": ["TLS_AES_128_GCM_SHA256", "TLS_CHACHA20_POLY1305_SHA256"]}, "type": "Custom"}}]'
3. The expectation is that HCO propagates its own TLS settings back to SSP, and it does:
$ oc get ssp ssp-kubevirt-hyperconverged -n openshift-cnv -ojsonpath={.spec.tlsSecurityProfile}
{"old":{},"type":"Old"}

However, during this whole procedure the SSP pod (ssp-operator-79bbc48bc5-tch2n) stayed in CrashLoopBackOff for nearly 5 minutes:

$ oc get pods -A -w | grep -i ssp
openshift-cnv   ssp-operator-79bbc48bc5-tch2n   0/1   CrashLoopBackOff   10 (4m54s ago)   28h

Is it okay to wait 5-6 minutes, or should recovery be faster?
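On the 5-6 minute question: the delay is consistent with the kubelet's CrashLoopBackOff behavior, which (assuming the default policy) starts around a 10s delay and doubles after each restart up to a 5-minute cap, so a pod that has already crashed ~10 times waits close to 5 minutes before each retry. An illustrative sketch:

```python
# Illustrative sketch (assumption: kubelet's default CrashLoopBackOff policy,
# ~10s initial delay, doubled after each restart, capped at 5 minutes).
def backoff_delays(restarts: int, initial: int = 10, cap: int = 300):
    """Return the per-restart wait times, in seconds."""
    delays, delay = [], initial
    for _ in range(restarts):
        delays.append(min(delay, cap))
        delay *= 2
    return delays

print(backoff_delays(7))  # [10, 20, 40, 80, 160, 300, 300]
```

This only explains the observed wait between restarts; it does not excuse the crashes themselves, which are tracked separately.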
SSP now works with TLS 1.3 and no longer raises a validation error, but frequent SSP pod crashes are still seen, and at times it takes almost 5-6 minutes for the pods to come back up. Based on discussion this needs to be fixed; raising a separate bug (https://bugzilla.redhat.com/show_bug.cgi?id=2151248) for that issue and closing this one.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Virtualization 4.12.0 Images security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2023:0408