It seems the bug is not fixed on openshift v3.11.153:

time="2019-11-20T07:39:57.920274926Z" level=info msg="start prune (dry-run mode)" distribution_version=v2.6.2+unknown go.version=go1.9.7 instance.id=3c52f428-5ccc-454b-9886-2f367058099a openshift_version=v3.11.154
time="2019-11-20T07:39:58.591857027Z" level=error msg="s3aws: invalid path: /docker/registry/v2/repositories/foo/"
Would delete 0 blobs
Would free up 0B of disk space
Use -prune=delete to actually delete the data
command terminated with exit code 1
I assume you are testing using the following procedure (copied and pasted from a previous comment):

1. Import some image, it'll create something like /docker/registry/v2/repositories/NAMESPACE/NAME
2. Copy these files as docker/registry/v2/repositories/foo/bar/baz

Unfortunately this test won't work. S3 storage drivers on OpenShift versions < 4.1 are way more sensitive to the bucket content than later versions; they do not behave well if you copy or move files around. I suggest you use the file-system storage inside the container to do this test. The procedure would be something like:

1. Deploy the image registry without s3.
2. Import some image as before.
3. Shell into the image registry container and do the copy.
4. Run the pruner.

For reference: when listing the contents of a repository called image0 that was created by the image registry, we can see:

[rmarasch@localhost sample]$ AWS_REGION=us-east-1 go run main.go
docker/registry/v2/repositories/myproject/image0/_layers
docker/registry/v2/repositories/myproject/image0/_manifests

Now doing the same, but against a copy of the image0 repository (called here image1):

[rmarasch@localhost sample]$ AWS_REGION=us-east-1 go run main.go
docker/registry/v2/repositories/myproject/image1/
docker/registry/v2/repositories/myproject/image1/_layers
docker/registry/v2/repositories/myproject/image1/_manifests

As we can see, there is an extra entry (for the directory) that was not present when using the original repository. This extra entry (as it contains a / at the end) makes the pruner explode on version 3.11.
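To illustrate why that extra entry breaks the old pruner: a key ending in "/" splits into a path with an empty final component, which a strict parser rejects. This is a minimal hypothetical sketch (splitRepoPath is not the registry's actual function, just an illustration of the failure mode):

```go
package main

import (
	"fmt"
	"strings"
)

// splitRepoPath splits a storage-driver path into components and rejects
// paths with an empty component, such as the trailing "/" left behind by
// a copied S3 directory entry. Hypothetical sketch, not the registry code.
func splitRepoPath(p string) ([]string, error) {
	parts := strings.Split(strings.TrimPrefix(p, "/"), "/")
	for _, c := range parts {
		if c == "" {
			return nil, fmt.Errorf("invalid path: %s", p)
		}
	}
	return parts, nil
}

func main() {
	// The directory entry from the copied repository ends with "/",
	// so the last component is empty and parsing fails.
	if _, err := splitRepoPath("/docker/registry/v2/repositories/foo/"); err != nil {
		fmt.Println(err)
	}
	// A key written by the registry itself parses cleanly.
	parts, _ := splitRepoPath("/docker/registry/v2/repositories/foo")
	fmt.Println(len(parts)) // 5
}
```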
Thanks for your suggestion, Ricardo! Verified with the version below:

openshift v3.11.156

[root@ip-172-18-8-190 wzhengtest]# oc adm policy add-cluster-role-to-user system:image-pruner system:serviceaccount:default:registry
cluster role "system:image-pruner" added: "system:serviceaccount:default:registry"
[root@ip-172-18-8-190 wzhengtest]# oc rsh docker-registry-1-zhwxv
sh-4.2$ mkdir wzheng-ruby-ex; cp -R registry/docker/registry/v2/repositories/wzhengtest/ruby-ex registry/docker/registry/v2/repositories/wzhengtest/wzheng-ruby-ex
sh-4.2$ REGISTRY_LOG_LEVEL=info /usr/bin/dockerregistry -prune=check
INFO[0000] DEPRECATED: "OPENSHIFT_DEFAULT_REGISTRY" is deprecated, use the 'REGISTRY_OPENSHIFT_SERVER_ADDR' instead
INFO[0000] start prune (dry-run mode) distribution_version=v2.6.2+unknown go.version=go1.9.7 instance.id=86aa58a1-d6d5-4229-879a-ebf04f1c8a1c openshift_version=v3.11.156
INFO[0000] Invalid image name wzhengtest/wzheng-ruby-ex/ruby-ex, removing whole repository go.version=go1.9.7 instance.id=86aa58a1-d6d5-4229-879a-ebf04f1c8a1c
INFO[0000] Would delete repository: wzhengtest/wzheng-ruby-ex/ruby-ex go.version=go1.9.7 instance.id=86aa58a1-d6d5-4229-879a-ebf04f1c8a1c
Would delete 0 blobs
Would free up 0B of disk space
Use -prune=delete to actually delete the data
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:4050