Bug 1766113 - [3.11.z] Cannot prune registry 'invalid resource name "psd2-anusha/psd2-anusha": [may not contain '/']'
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Image Registry
Version: 3.9.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 3.11.z
Assignee: Ricardo Maraschini
QA Contact: Wenjing Zheng
URL:
Whiteboard:
Depends On: 1749256
Blocks: 1752513 1755780 1766110
 
Reported: 2019-10-28 10:23 UTC by Ricardo Maraschini
Modified: 2020-01-08 18:18 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: older imagestreams could have an invalid name.
Consequence: image pruning failed because the specs for the imagestream's tags were not valid.
Fix: the image pruner now always prunes images if the associated imagestream has an invalid name.
Result: image pruning completes when imagestreams with invalid names are present.
Clone Of: 1749256
Environment:
Last Closed: 2019-12-16 11:57:11 UTC
Target Upstream Version:


Attachments (Terms of Use)


Links
System ID Priority Status Summary Last Updated
Github openshift image-registry pull 199 'None' closed Bug 1766113: Checking if image name is valid, prune otherwise. 2020-05-12 01:23:11 UTC
Red Hat Product Errata RHBA-2019:4050 None None None 2019-12-16 11:57:28 UTC

Comment 1 Wenjing Zheng 2019-11-20 07:51:08 UTC
It seems the bug is not fixed on openshift v3.11.153:
time="2019-11-20T07:39:57.920274926Z" level=info msg="start prune (dry-run mode)" distribution_version=v2.6.2+unknown go.version=go1.9.7 instance.id=3c52f428-5ccc-454b-9886-2f367058099a openshift_version=v3.11.154 
time="2019-11-20T07:39:58.591857027Z" level=error msg="s3aws: invalid path: /docker/registry/v2/repositories/foo/" 
Would delete 0 blobs
Would free up 0B of disk space
Use -prune=delete to actually delete the data
command terminated with exit code 1

Comment 2 Ricardo Maraschini 2019-11-21 11:00:27 UTC
I assume you are testing using the following procedure (copied and pasted from a previous comment):

1. Import some image, it'll create something like /docker/registry/v2/repositories/NAMESPACE/NAME 
2. Copy these files as docker/registry/v2/repositories/foo/bar/baz

Unfortunately this test won't work. S3 storage drivers on OpenShift versions < 4.1 are much more sensitive to the bucket content than later versions: they do not behave well if you copy or move files around. I suggest you use the file-system storage inside the container for this test; the procedure would be something like:

1. Deploy the image registry without s3.
2. Import some image as before.
3. Shell into the image registry container and do the copy.
4. Run the pruner.


For reference:

When listing contents inside a repository called image0 that has been created by the image registry, we can see:

[rmarasch@localhost sample]$ AWS_REGION=us-east-1 go run main.go 
docker/registry/v2/repositories/myproject/image0/_layers
docker/registry/v2/repositories/myproject/image0/_manifests

Now doing the same, but against a copy of the image0 repository (called image1 here):

[rmarasch@localhost sample]$ AWS_REGION=us-east-1 go run main.go 
docker/registry/v2/repositories/myproject/image1/
docker/registry/v2/repositories/myproject/image1/_layers
docker/registry/v2/repositories/myproject/image1/_manifests

As we can see, there is an extra entry (for the directory itself) that was not present when using the original repository. This extra entry, because it ends with a /, makes the pruner blow up on version 3.11.
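The trailing-slash problem described above can be illustrated with the path rule used by docker/distribution storage drivers. This is a minimal sketch, assuming a simplified version of the driver's path regexp (the real PathRegexp in docker/distribution is similar, but this is not a verbatim copy):

```go
package main

import (
	"fmt"
	"regexp"
)

// pathRegexp approximates the docker/distribution storage-driver rule:
// one or more /-prefixed components, with no trailing slash allowed.
var pathRegexp = regexp.MustCompile(`^(/[A-Za-z0-9._-]+)+$`)

func main() {
	// The entry written by the registry itself validates fine.
	fmt.Println(pathRegexp.MatchString("/docker/registry/v2/repositories/foo")) // true
	// The extra directory entry left behind by a manual S3 copy ends
	// with '/', so the driver rejects it: "s3aws: invalid path: ...".
	fmt.Println(pathRegexp.MatchString("/docker/registry/v2/repositories/foo/")) // false
}
```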

Comment 3 Wenjing Zheng 2019-11-26 06:26:59 UTC
Thanks for your suggestion, Ricardo!
Verified with below version:
openshift v3.11.156

[root@ip-172-18-8-190 wzhengtest]# oc adm policy add-cluster-role-to-user system:image-pruner system:serviceaccount:default:registry
cluster role "system:image-pruner" added: "system:serviceaccount:default:registry"
[root@ip-172-18-8-190 wzhengtest]# oc rsh docker-registry-1-zhwxv
sh-4.2$ mkdir wzheng-ruby-ex; cp -R registry/docker/registry/v2/repositories/wzhengtest/ruby-ex registry/docker/registry/v2/repositories/wzhengtest/wzheng-ruby-ex
sh-4.2$ REGISTRY_LOG_LEVEL=info /usr/bin/dockerregistry -prune=check
INFO[0000] DEPRECATED: "OPENSHIFT_DEFAULT_REGISTRY" is deprecated, use the 'REGISTRY_OPENSHIFT_SERVER_ADDR' instead 
INFO[0000] start prune (dry-run mode)                    distribution_version=v2.6.2+unknown go.version=go1.9.7 instance.id=86aa58a1-d6d5-4229-879a-ebf04f1c8a1c openshift_version=v3.11.156
INFO[0000] Invalid image name wzhengtest/wzheng-ruby-ex/ruby-ex, removing whole repository  go.version=go1.9.7 instance.id=86aa58a1-d6d5-4229-879a-ebf04f1c8a1c
INFO[0000] Would delete repository: wzhengtest/wzheng-ruby-ex/ruby-ex  go.version=go1.9.7 instance.id=86aa58a1-d6d5-4229-879a-ebf04f1c8a1c
Would delete 0 blobs
Would free up 0B of disk space
Use -prune=delete to actually delete the data
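The behavior verified in the log above (an invalid imagestream name triggers whole-repository pruning instead of an error) can be sketched roughly as follows; shouldPrune and the name regexp are illustrative stand-ins, not the actual pruner code:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// validName is a simplified version of the Kubernetes resource-name
// rule; in particular, a resource name may not contain '/'.
var validName = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

// shouldPrune reports whether a repository path holds an invalid
// imagestream name, in which case the pruner removes the whole
// repository instead of failing.
func shouldPrune(repoPath string) bool {
	// A valid repository path is exactly NAMESPACE/NAME.
	parts := strings.SplitN(repoPath, "/", 2)
	if len(parts) != 2 {
		return true
	}
	return !validName.MatchString(parts[0]) || !validName.MatchString(parts[1])
}

func main() {
	fmt.Println(shouldPrune("wzhengtest/ruby-ex"))                // false: valid, kept
	fmt.Println(shouldPrune("wzhengtest/wzheng-ruby-ex/ruby-ex")) // true: invalid name, pruned
}
```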

Comment 5 errata-xmlrpc 2019-12-16 11:57:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:4050

