Description of problem:

The image-registry does not enable CloudFront as middleware when it is included in configs.imageregistry.operator.openshift.io:

  spec:
    storage:
      s3:
        cloudFront:
          baseURL: https://....cloudfront.net/
          keypairID: ....
          duration: 300s
          privateKey:
            name: ...-registry-private-key
            key: ...-registry-private-key.pem

The image-registry pods restart and contain a set of environment variables that are intended to carry the middleware configuration to the dockerregistry entrypoint. However, these are ignored. When debugging the code, you can see that REGISTRY_MIDDLEWARE_STORAGE_CLOUDFRONT_BASEURL is not parsed into the configuration structure as one might expect from the dockerregistry documentation. I believe the reason is that registry.middleware.storage is a list and not a simple map; https://stackoverflow.com/a/69089002/6431503 describes a fix.

Version-Release number of selected component (if applicable):

v4.10.5

How reproducible:

100%

Steps to Reproduce:
1. Install an IPI OpenShift v4 cluster in AWS.
2. Edit configs.imageregistry.operator.openshift.io:
   a. Expose a public route for the registry (spec.defaultRoute=true).
   b. Increase the logging level to Trace (spec.logLevel: Trace).
   c. Enable cloudFront (configs.imageregistry.operator.openshift.io.spec.storage.s3.cloudFront).
3. Allow the image-registry pods to restart.
4. docker pull default-route.<registry>/openshift/tests
5. Review the image-registry pod logs.

Actual results:
1. The cloudFront configuration does not take effect.
2. The REGISTRY_MIDDLEWARE_STORAGE_CLOUDFRONT_DURATION environment variable is corrupted into something like '&Duration{Duration:300s,}'.

Expected results:

The cloudFront configuration takes effect and image blob GETs are redirected to AWS CloudFront.
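For context, below is a minimal sketch of the middleware section the dockerregistry entrypoint would need in its config.yml to activate CloudFront, following the upstream docker/distribution configuration documentation; the values are illustrative placeholders, not taken from this cluster. It shows that middleware.storage is a list of named entries rather than a flat map, so an environment variable such as REGISTRY_MIDDLEWARE_STORAGE_CLOUDFRONT_BASEURL has no list element to resolve against, which is consistent with the behaviour described above.

  # Sketch of the cloudfront storage middleware in the registry's config.yml
  # (illustrative values; option names per upstream docker/distribution docs).
  middleware:
    storage:
      # "storage" is a list of named middleware entries, not a key/value map,
      # which is why flat env-var overrides cannot address its fields.
      - name: cloudfront
        options:
          baseurl: https://example1234.cloudfront.net/
          privatekey: /etc/docker/cloudfront/private-key.pem
          keypairid: EXAMPLEKEYPAIRID
          duration: 300s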
We're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z. The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way. Sample answers are provided to give more context, and the ImpactStatementRequested label has been added to this bug. When responding, please remove ImpactStatementRequested and set the ImpactStatementProposed label. The expectation is that the assignee answers these questions.

Who is impacted? If we have to block upgrade edges based on this issue, which edges would need blocking?
  example: Customers upgrading from 4.y.z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
  example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time

What is the impact? Is it serious enough to warrant blocking edges?
  example: Up to 2 minute disruption in edge routing
  example: Up to 90 seconds of API downtime
  example: etcd loses quorum and you have to restore from backup

How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
  example: Issue resolves itself after five minutes
  example: Admin uses oc to fix things
  example: Admin must SSH to hosts, restore from backups, or other non-standard admin activities

Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
  example: No, it has always been like this, we just never noticed
  example: Yes, from 4.y.z to 4.y+1.z, or from 4.y.z to 4.y.z+1
Adding the ImpactStatementRequested label to catch up with comment 3.
> Who is impacted? If we have to block upgrade edges based on this issue, which edges would need blocking?

All customers trying to use the cloudFront middleware for the internal registry.

> What is the impact? Is it serious enough to warrant blocking edges?

Assuming cloudFront was otherwise working, the bug causes the registry to fall back to S3-based content distribution. This could cause a dramatic decrease in throughput for worldwide distribution and could increase cloud costs, depending on usage.

> How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?

There is no mitigation.

> Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?

Unknown.
Removing the upgrade blocker keyword, since we are not planning to block edges for this bug.
Lowering severity and priority; it has worked this way since at least 4.5, so most likely this is not a regression.
Comment 11 says this is not a regression, therefore clearing that keyword and the upgrades keyword.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:5069