Description of problem:

The default installation of the internal registry for IPI on public clouds such as Google Cloud should use client redirection to send clients directly to the backend data store (GCS). See https://docs.openshift.com/container-platform/4.8/registry/configuring-registry-operator.html (disableRedirect=false by default). On 4.10.5, this redirection does not occur on GCP clusters, and the image-registry pod proxies the data through itself instead. This drives significantly higher load on the pod and dramatically reduces the scalability of the registry.

Version-Release number of selected component (if applicable):
v4.10.5

How reproducible:
100%

Steps to Reproduce:
1. Install an IPI GCP v4.10.5 cluster.
2. Expose a default route for the registry.
3. Use curl to download a blob from one of the internal registry images.

Actual results:
The blob content is proxied through the registry pod.

Expected results:
curl reports a 307 redirect back to Google Cloud Storage.

Additional info:
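The check in step 3 can be sketched as a small shell helper that classifies the response headers from a blob GET against the registry route. The function name and sample header text are illustrative, not part of any tooling; the real headers would come from `curl -kI` (or `curl -kv`) with a bearer token, as shown later in this bug.

```shell
# is_redirected: given the response headers from a blob GET against the
# registry route, report whether the registry redirected the client to GCS
# (expected: HTTP 307 with a storage.googleapis.com Location header) or
# served the blob itself (the buggy proxying behavior).
is_redirected() {
  headers="$1"
  if printf '%s\n' "$headers" | grep -qi '^HTTP/[0-9.]* 307' &&
     printf '%s\n' "$headers" | grep -qi '^location: https://storage\.googleapis\.com/'; then
    echo "redirect"
  else
    echo "proxy"
  fi
}

# Sample of the header shape seen when redirection works:
sample='HTTP/1.1 307 Temporary Redirect
location: https://storage.googleapis.com/some-bucket/docker/registry/v2/blobs/sha256/3d/3de00bb8554b2c35c89412d5336f1fa469afc7b0160045dd08758d92c8a6b064/data'
is_redirected "$sample"    # prints: redirect
```

On an affected 4.10.5 GCP cluster, the same request returns a 200 with the blob body and no GCS Location header, so the helper would report "proxy".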
We're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z. The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way. Sample answers are provided to give more context, and the ImpactStatementRequested label has been added to this bug. When responding, please remove ImpactStatementRequested and set the ImpactStatementProposed label. The expectation is that the assignee answers these questions.

Who is impacted? If we have to block upgrade edges based on this issue, which edges would need blocking?
* example: Customers upgrading from 4.y.z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
* example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time

What is the impact? Is it serious enough to warrant blocking edges?
* example: Up to 2 minute disruption in edge routing
* example: Up to 90 seconds of API downtime
* example: etcd loses quorum and you have to restore from backup

How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
* example: Issue resolves itself after five minutes
* example: Admin uses oc to fix things
* example: Admin must SSH to hosts, restore from backups, or other non-standard admin activities

Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
* example: No, it has always been like this, we just never noticed
* example: Yes, from 4.y.z to 4.y+1.z, or from 4.y.z to 4.y.z+1
Also, it'd be helpful to understand whether the root cause here is the same as in Bug 2065224.
> Who is impacted? If we have to block upgrade edges based on this issue, which edges would need blocking?

All customers using GCP with GCS as the backend for the internal registry with `disableRedirect=false` (the default).

> What is the impact? Is it serious enough to warrant blocking edges?

Dramatic reduction of internal registry scalability and increased cloud costs. Impact depends on usage.

> How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?

No mitigation.

> Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?

Regressed from 4.9.
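As a quick way to confirm whether a cluster is running with the default setting described above, the registry operator configuration can be inspected directly. This is a sketch against the standard `configs.imageregistry.operator.openshift.io/cluster` resource; an empty output means the field is unset and the default (`false`, i.e. redirection enabled) applies.

```shell
# Print the disableRedirect setting from the image registry operator config.
# Empty or "false" means client redirection to the backend (GCS) is enabled.
oc get configs.imageregistry.operator.openshift.io/cluster \
  -o jsonpath='{.spec.disableRedirect}'
```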
Tested on a 4.11.0-0.ci.test-2022-03-25-055044-ci-ln-yt1z10t-latest GCP cluster:

export token=$(oc sa get-token builder)
export route=$(oc get route -n openshift-image-registry -o jsonpath='{..spec.host}')
curl -kv -X GET -H "Authorization: Bearer $token" -H "Accept: application/json" https://$route/v2/openshift/httpd/blobs/sha256:3de00bb8554b2c35c89412d5336f1fa469afc7b0160045dd08758d92c8a6b064 | grep location

< location: https://storage.googleapis.com/ci-ln-yt1z10t-72292-jz7lx-image-registry-us-central1-nkdbyjtlx/docker/registry/v2/blobs/sha256/3d/3de00bb8554b2c35c89412d5336f1fa469afc7b0160045dd08758d92c8a6b064/data?Expires=1648200894&GoogleAccessId=ci-ln-yt1z10-openshift-i-qww47%40openshift-gce-devel-ci.iam.gserviceaccount.com&Signature=ipUreg0omvCT92U2dto9y2hvRmQmmY0ey7b4uvFDlkiJzCUILsP8c8j9iEbMRfipUgmdtVZkRVcuoMWL2odRTTj5ib0Gbv%2B%2B%2FWakmquFx5KBH%2FInBAvfxgz5PQjH83ElxEPXWaWPt7my54yaftCx09b59sDn3pV9glr68oIGXNq%2BtOjexBr0hvIby%2BwO6W9tat1COA%2FaBC818AUWmamHn3IyywSCrHSnA3qReKa1z%2Bi3ZQdRpNI1FREHfmrPOVcL7O97836ez6NpfrKM3folVByyPYC0%2F%2FUz51zMMuq9IElHPgRWOy%2B33Or7NAgjNq2Xd%2BdcHuafkN%2FYoRrhtULYqQ%3D%3D

I will test more object storage backends, then approve the PR.
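When verifying, it is also worth confirming that the signed GCS URL in the Location header actually points at the blob that was requested. A small helper (illustrative, not part of any tooling) can recover the digest embedded in the `.../blobs/sha256/<xx>/<digest>/data` path for comparison:

```shell
# digest_from_location: extract the blob digest embedded in a GCS signed URL
# of the form .../docker/registry/v2/blobs/sha256/<xx>/<digest>/data?... so
# it can be compared against the digest that was requested from the registry.
digest_from_location() {
  printf '%s\n' "$1" |
    sed -n 's|.*/blobs/sha256/[0-9a-f][0-9a-f]/\([0-9a-f]*\)/data.*|sha256:\1|p'
}

# Using the Location header from the test above (query string abbreviated):
loc='https://storage.googleapis.com/ci-ln-yt1z10t-72292-jz7lx-image-registry-us-central1-nkdbyjtlx/docker/registry/v2/blobs/sha256/3d/3de00bb8554b2c35c89412d5336f1fa469afc7b0160045dd08758d92c8a6b064/data?Expires=1648200894'
digest_from_location "$loc"   # prints the sha256:... digest from the URL path
```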
[1] documents the impact of the revert for folks updating within 4.10, and bug 2070791 is tracking the impact for clusters with GCP workload identity enabled, so we will probably not change update recommendations for this bug. Dropping UpgradeBlocker. Feel free to restore the keyword if new information comes up.

[1]: https://github.com/openshift/openshift-docs/pull/44062
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5069