Description of problem:
On an on-premise cluster with a Scality S3 implementation, the registry logs start to show errors (see below) after configuring registry storage to S3. Despite the errors, the registry seems to work as expected.

~~~
time="{ANONYMIZED}" level=info msg="PurgeUploads starting: olderThan={ANONYMIZED} m=-604739.966203103, actuallyDelete=true"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xe0d1bd]

goroutine 30 [running]:
github.com/docker/distribution/registry/storage/driver/s3-aws.(*driver).doWalk.func1(0xc0006ea200, 0xc000364901, 0x7f82c294f108)
    /go/src/github.com/openshift/image-registry/vendor/github.com/docker/distribution/registry/storage/driver/s3-aws/s3.go:1014 +0x9d
github.com/aws/aws-sdk-go/service/s3.(*S3).ListObjectsV2PagesWithContext(0xc0002097c0, 0x1f2d1a0, 0xc0009c01c0, 0xc000126280, 0xc000bcf9c0, 0x0, 0x0, 0x0, 0x50f7e5, 0x1a393c0)
    /go/src/github.com/openshift/image-registry/vendor/github.com/aws/aws-sdk-go/service/s3/api.go:6104 +0x16a
github.com/docker/distribution/registry/storage/driver/s3-aws.(*driver).doWalk(0xc000213380, 0x1f2d1a0, 0xc0009c0150, 0xc00082bab0, 0xc000139141, 0x20, 0x1c12ef6, 0x1, 0xc0008ae1e0, 0x0, ...)
    /go/src/github.com/openshift/image-registry/vendor/github.com/docker/distribution/registry/storage/driver/s3-aws/s3.go:1012 +0x377
github.com/docker/distribution/registry/storage/driver/s3-aws.(*driver).Walk(0xc000213380, 0x1f2d1a0, 0xc0009c0150, 0xc000dae160, 0x20, 0xc0008ae1e0, 0xc000dae160, 0x20)
    /go/src/github.com/openshift/image-registry/vendor/github.com/docker/distribution/registry/storage/driver/s3-aws/s3.go:960 +0x172
github.com/docker/distribution/registry/storage/driver/base.(*Base).Walk(0xc0005e1ce0, 0x1f2d1a0, 0xc0009c0150, 0xc000dae160, 0x20, 0xc0008ae1e0, 0x0, 0x0)
    /go/src/github.com/openshift/image-registry/vendor/github.com/docker/distribution/registry/storage/driver/base/base.go:239 +0x28c
github.com/docker/distribution/registry/storage.getOutstandingUploads(0x1f2d2e0, 0xc0007fc3c0, 0x1f47c60, 0xc0005e1ce0, 0xc0006ccde8, 0x2, 0x2, 0xd)
    /go/src/github.com/openshift/image-registry/vendor/github.com/docker/distribution/registry/storage/purgeuploads.go:70 +0x24c
github.com/docker/distribution/registry/storage.PurgeUploads(0x1f2d2e0, 0xc0007fc3c0, 0x1f47c60, 0xc0005e1ce0, 0xc025580bf750dff3, 0xfffdd9fe01220b21, 0x2aa7fc0, 0x1, 0x0, 0x0, ...)
    /go/src/github.com/openshift/image-registry/vendor/github.com/docker/distribution/registry/storage/purgeuploads.go:34 +0x15d
github.com/docker/distribution/registry/handlers.startUploadPurger.func1(0x1f50940, 0xc0007ff2d0, 0x1f2d2e0, 0xc0007fc3c0, 0x1f47c60, 0xc0005e1ce0, 0x2260ff9290000, 0x0, 0x4e94914f0000)
    /go/src/github.com/openshift/image-registry/vendor/github.com/docker/distribution/registry/handlers/app.go:1123 +0x22d
created by github.com/docker/distribution/registry/handlers.startUploadPurger
    /go/src/github.com/openshift/image-registry/vendor/github.com/docker/distribution/registry/handlers/app.go:1116 +0x2d8
~~~

Version-Release number of selected component (if applicable):
OCP 4.6.28

How reproducible:
Always

Steps to Reproduce:
1. Install cluster.
2. Configure registry storage:
~~~
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  finalizers:
  - imageregistry.operator.openshift.io/finalizer
  generation: 5
  name: cluster
spec:
  ...
  storage:
    managementState: Unmanaged
    s3:
      bucket: {bucket-name}
      region: {region-name}
      regionEndpoint: {s3-endpoint}
  ...
~~~

Actual results:
Error in logs.

Expected results:
No error in logs.

Additional info:
S3 storage is an on-premise Scality solution.
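The trace points at the ListObjectsV2 pagination callback in the s3-aws driver (s3.go:1014). One plausible reading is that the callback dereferences the optional KeyCount field of the response, which AWS S3 always populates but some S3-compatible backends omit. Below is a minimal sketch of that pattern and a guarded variant; it is illustrative only (the function name pageKeyCount is hypothetical, not the vendored code), assuming aws-sdk-go v1:

~~~
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/s3"
)

// pageKeyCount shows the guarded pattern: KeyCount is an optional *int64 on
// the ListObjectsV2 response. AWS S3 always populates it, but an
// S3-compatible backend may not, and an unguarded `*objects.KeyCount`
// dereference would panic exactly as in the trace above.
func pageKeyCount(objects *s3.ListObjectsV2Output) int64 {
	if objects.KeyCount != nil {
		return *objects.KeyCount
	}
	// Fall back to counting the entries actually present in this page.
	return int64(len(objects.Contents) + len(objects.CommonPrefixes))
}

func main() {
	// A page as a Scality-style backend might return it: KeyCount unset.
	page := &s3.ListObjectsV2Output{}
	fmt.Println(pageKeyCount(page)) // prints 0 instead of panicking
}
~~~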
Can you reproduce it on AWS? We don't support third-party storage solutions.
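To see whether a given endpoint even returns the field the driver trips on, a small standalone probe can list one page from the bucket and report whether KeyCount is present. This is a sketch under stated assumptions: the endpoint and bucket names are placeholders, credentials come from the usual AWS environment variables or shared config, and aws-sdk-go v1 is used:

~~~
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// Placeholder region/endpoint/bucket; substitute your own values.
	sess, err := session.NewSession(&aws.Config{
		Region:           aws.String("us-east-1"),
		Endpoint:         aws.String("https://s3.example.internal"),
		S3ForcePathStyle: aws.Bool(true), // common for on-premise S3
	})
	if err != nil {
		log.Fatal(err)
	}

	err = s3.New(sess).ListObjectsV2Pages(&s3.ListObjectsV2Input{
		Bucket: aws.String("my-bucket"),
	}, func(page *s3.ListObjectsV2Output, lastPage bool) bool {
		fmt.Printf("KeyCount present: %v\n", page.KeyCount != nil)
		return false // one page is enough for the probe
	})
	if err != nil {
		log.Fatal(err)
	}
}
~~~

Against AWS S3 this should print "KeyCount present: true"; a backend that omits the field would be the kind of implementation difference this bug exposes.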
Is it expected that this will be merged soon?
This issue is scheduled for the next sprint and is expected to be fixed by March 11.
Thank you very much, Oleg.
Ran a registry regression test on a vSphere 4.11.0-0.nightly-2022-03-04-063157 cluster that configures S3 storage for the image registry. No issues found. https://polarion.engineering.redhat.com/polarion/#/project/OSE/testrun?id=20220307-0629
After re-checking the registry with ODF Ceph RGW on 4.11.0-0.nightly-2022-03-13-055724, it returns a 403 error when pushing an image. Assigning this bug back.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:5069