Bug 1981633
Summary: | enhance service-ca injection | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | OpenShift BugZilla Robot <openshift-bugzilla-robot>
Component: | service-ca | Assignee: | David Eads <deads>
Status: | CLOSED ERRATA | QA Contact: | liyao
Severity: | high | Docs Contact: |
Priority: | unspecified | |
Version: | 4.9 | CC: | aos-bugs, mfojtik, sidsharm, surbania, xxia
Target Milestone: | --- | |
Target Release: | 4.8.0 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2021-07-27 23:13:47 UTC | Type: | ---
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1981498 | |
Bug Blocks: | 1981634 | |
Description
OpenShift BugZilla Robot
2021-07-13 03:16:24 UTC
Use the cluster-bot to launch the env with the still-open but dev-approved PR(s) to do the pre-merge verification.

Test in a fresh 4.8 env:

1. Check openshift-service-ca.crt:

$ oc new-project testproj
$ oc get cm openshift-service-ca.crt -o yaml

The annotation service.beta.openshift.io/inject-cabundle: "true" is present, and only one cert is in the service-ca.crt field. This is secure, as expected.

2. Check another configmap:

$ oc create cm testconfigmap --from-literal=key=value
$ oc annotate cm testconfigmap service.alpha.openshift.io/inject-vulnerable-legacy-cabundle=true

`oc get cm testconfigmap -o yaml` shows no service-ca.crt field, which means service.alpha.openshift.io/inject-vulnerable-legacy-cabundle=true only takes effect for the configmap named "openshift-service-ca.crt".

Test upgrade from 4.7 to 4.8:

1. Check openshift-service-ca.crt after upgrading to 4.8:

$ oc adm upgrade --to-image=registry.build01.ci.openshift.org/ci-ln-mqnmdj2/release:latest --force=true --allow-explicit-upgrade=true
$ oc get clusterversion
NAME      VERSION                                                  AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.ci.test-2021-07-15-073410-ci-ln-mqnmdj2-latest   True        False         26m     Cluster version is 4.8.0-0.ci.test-2021-07-15-073410-ci-ln-mqnmdj2-latest
$ oc get cm openshift-service-ca.crt -o yaml

The annotation service.alpha.openshift.io/inject-vulnerable-legacy-cabundle is present, and multiple certs are in the service-ca.crt field, which is the expected 4.7 behavior.

2. Check openshift-service-ca.crt after changing useMoreSecureServiceCA to true:

$ oc edit kubecontrollermanager cluster
  useMoreSecureServiceCA: true
$ oc get cm openshift-service-ca.crt -o yaml

The annotation service.beta.openshift.io/inject-cabundle: "true" is present, and only one cert is in the service-ca.crt field. The cluster switched to secure mode as expected.

3. Check that useMoreSecureServiceCA cannot be changed back to false:

$ oc edit kubecontrollermanager cluster
  useMoreSecureServiceCA: false

Changing it back to false is forbidden, as expected.

Additional test: check that the env var OPENSHIFT_USE_VULNERABLE_LEGACY_SERVICE_CA_CRT existed in the KCM pod after the upgrade to 4.8. useMoreSecureServiceCA was updated to true in https://bugzilla.redhat.com/show_bug.cgi?id=1981633#c2, which caused the KCM pods to restart, so the test checks the env in the last KCM pod yaml from before the restart:

$ oc get po -n openshift-kube-controller-manager -L revision -l revision
NAME                                                                 READY   STATUS    RESTARTS   AGE    REVISION
kube-controller-manager-ip-10-0-153-18.us-east-2.compute.internal    4/4     Running   0          103m   12
kube-controller-manager-ip-10-0-177-216.us-east-2.compute.internal   4/4     Running   0          102m   12
kube-controller-manager-ip-10-0-215-248.us-east-2.compute.internal   4/4     Running   0          103m   12
$ oc debug no/ip-10-0-153-18.us-east-2.compute.internal
sh-4.4# chroot /host
sh-4.4# bash
[root@ip-10-0-153-18 /]# cd /etc/kubernetes/static-pod-resources/
[root@ip-10-0-153-18 static-pod-resources]# ls -d kube-controller-manager-pod-*
kube-controller-manager-pod-10  kube-controller-manager-pod-3  kube-controller-manager-pod-6  kube-controller-manager-pod-9
kube-controller-manager-pod-11  kube-controller-manager-pod-4  kube-controller-manager-pod-7
kube-controller-manager-pod-12  kube-controller-manager-pod-5  kube-controller-manager-pod-8
[root@ip-10-0-153-18 static-pod-resources]# cd kube-controller-manager-pod-11
[root@ip-10-0-153-18 kube-controller-manager-pod-11]# cat kube-controller-manager-pod.yaml | jq '' | grep -i -A 5 vulner
          "name": "OPENSHIFT_USE_VULNERABLE_LEGACY_SERVICE_CA_CRT",
          "value": "true"
        }
      ],
      "resources": {
        "requests": {
--
          "name": "OPENSHIFT_USE_VULNERABLE_LEGACY_SERVICE_CA_CRT",
          "value": "true"
        }
      ],
      "resources": {
        "requests": {
--
          "name": "OPENSHIFT_USE_VULNERABLE_LEGACY_SERVICE_CA_CRT",
          "value": "true"
        }
      ],
      "resources": {
        "requests": {
--
          "name":
"OPENSHIFT_USE_VULNERABLE_LEGACY_SERVICE_CA_CRT",
          "value": "true"
        }
      ],

This shows the OPENSHIFT_USE_VULNERABLE_LEGACY_SERVICE_CA_CRT env var existed in the last KCM pod yaml from before the restart.

As described in https://bugzilla.redhat.com/show_bug.cgi?id=1981633#c2 and https://bugzilla.redhat.com/show_bug.cgi?id=1981633#c3, the bug has been pre-merge verified. Moving to VERIFIED status manually.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438
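The configmap checks in the verification above boil down to two conditions: the secure annotation service.beta.openshift.io/inject-cabundle is present, and the service-ca.crt field contains exactly one PEM certificate (the legacy behavior injects multiple). A minimal Python sketch of that check, using a hypothetical inline sample instead of real cluster output from `oc get cm openshift-service-ca.crt -o yaml`:

```python
# Sketch: check whether a ConfigMap uses the secure service-ca injection mode.
# The sample annotations/bundle below are illustrative, not cluster output.
import re

SECURE_ANNOTATION = "service.beta.openshift.io/inject-cabundle"
LEGACY_ANNOTATION = "service.alpha.openshift.io/inject-vulnerable-legacy-cabundle"

def count_pem_certs(bundle: str) -> int:
    """Count PEM certificate blocks in a CA bundle string."""
    return len(re.findall(r"-----BEGIN CERTIFICATE-----", bundle))

def is_secure_injection(annotations: dict, bundle: str) -> bool:
    """Secure mode: beta annotation set to "true" and exactly one cert injected."""
    return (annotations.get(SECURE_ANNOTATION) == "true"
            and count_pem_certs(bundle) == 1)

# Hypothetical data standing in for the openshift-service-ca.crt ConfigMap.
annotations = {SECURE_ANNOTATION: "true"}
bundle = "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"

print(is_secure_injection(annotations, bundle))    # expected: True
print(count_pem_certs(bundle * 3))                 # expected: 3 (legacy-style bundle)
```

A bundle with multiple certificates, or the legacy alpha annotation in place of the beta one, would fail this check, mirroring the pre-fix 4.7 behavior described above.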
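The final check, grepping the saved static-pod manifest for OPENSHIFT_USE_VULNERABLE_LEGACY_SERVICE_CA_CRT, is equivalent to scanning each container's env list in the pod spec. A small sketch of that scan; the `pod` dict is a hypothetical stand-in for the parsed kube-controller-manager-pod.yaml, not the real manifest:

```python
# Sketch: report which containers in a pod spec define a given env var.
# `pod` is a hypothetical stand-in for the parsed static-pod manifest.
ENV_NAME = "OPENSHIFT_USE_VULNERABLE_LEGACY_SERVICE_CA_CRT"

def containers_with_env(pod: dict, name: str) -> dict:
    """Map container name -> env value for containers defining `name`."""
    found = {}
    for container in pod.get("spec", {}).get("containers", []):
        for env in container.get("env", []):
            if env.get("name") == name:
                found[container["name"]] = env.get("value")
    return found

pod = {
    "spec": {
        "containers": [
            {"name": "kube-controller-manager",
             "env": [{"name": ENV_NAME, "value": "true"}]},
            {"name": "cluster-policy-controller", "env": []},
        ]
    }
}

print(containers_with_env(pod, ENV_NAME))
# expected: {'kube-controller-manager': 'true'}
```

An empty result would mean the env var was never set, i.e. the vulnerable legacy injection path was not enabled in that revision of the pod.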