Description of problem:
As in the title, got this error message:

No repositories found: s3aws: Path not found: /docker/registry/v2/repositories/lgp/pod-for-ping/_manifests go.version=go1.8.3 instance.id=99e14e7c-215d-4b26-9c93-419401cbf0b8

Version:
openshift v3.7.0-0.104.0
kubernetes v1.7.0+695f48a16f
etcd 3.2.1

How reproducible:
Always

Steps to Reproduce:
1. Push some images to the integrated docker registry.
2. Import some other images and pull them via the integrated registry (to mirror the blobs).
3. Delete some images from both groups using oc delete.
4. Turn the registry into read-only mode (see documentation).
5. Verify the registry is usable for pulls but not for pushes.
6. Run the hard prune:

# oc -n default exec -i -t "$(oc -n default get pods -l deploymentconfig=docker-registry -o jsonpath=$'{.items[0].metadata.name}\n')" -- /usr/bin/dockerregistry -prune=check
WARN[0000] Ignoring unrecognized environment variable REGISTRY_CONSOLE_PORT
...... REGISTRY_CONSOLE_SERVICE_PORT_REGISTRY_CONSOLE
Would delete 70 blobs
Would free up 137.8 MiB of disk space
Use -prune=delete to actually delete the data

[root@ip-172-18-11-146 ~]# oc -n default exec -i -t "$(oc -n default get pods -l deploymentconfig=docker-registry -o jsonpath=$'{.items[0].metadata.name}\n')" -- /usr/bin/dockerregistry -prune=delete
WARN[0000] Ignoring unrecognized environment variable REGISTRY_CONSOLE_PORT
.................
WARN[0008] No repositories found: s3aws: Path not found: /docker/registry/v2/repositories/lgp/pod-for-ping/_manifests go.version=go1.8.3 instance.id=99e14e7c-215d-4b26-9c93-419401cbf0b8
Deleted 0 blobs
Freed up 0 B of disk space

Setup:
# oc exec -it docker-registry-3-ghl6x more /etc/registry/config.yml
version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  delete:
    enabled: true
  cache:
    blobdescriptor: inmemory
  s3:
    accesskey: xxxx....
    secretkey: gxxxxxxxxx......
    region: xxx
    bucket: openshiftxxxxx
    encrypt: False
    secure: true
    v4auth: true
    rootdirectory: /registry
    chunksize: "26214400"
auth:
  openshift:
    realm: openshift
middleware:
  registry:
    - name: openshift
  repository:
    - name: openshift
      options:
        pullthrough: True
        acceptschema2: True
        enforcequota: False
  storage:
    - name: openshift

Actual results:
Registry hard prune doesn't work with AWS S3 storage.

Expected results:
Registry hard prune should work with AWS S3 storage.
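Step 4 of the reproducer ("turn the registry into read-only mode") maps to the registry's storage maintenance setting. A minimal sketch of the relevant config.yml fragment, assuming the storage section shown above (see the docker/distribution configuration reference):

```yaml
storage:
  maintenance:
    readonly:
      enabled: true
```

In OpenShift this is typically applied as an environment override rather than by editing the file, e.g. setting REGISTRY_STORAGE_MAINTENANCE_READONLY='{"enabled":true}' on the docker-registry deployment config, and rolling it back after pruning.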
I can't reproduce this. Are you sure that another pruning process (a non-hard prune from a cronjob?) wasn't running in parallel?
The oadm prune command cannot prune orphaned blobs:

# oadm prune images --certificate-authority=ca.crt --registry-url=docker-registry-default.com --keep-younger-than=0 --confirm

I can reproduce this on an EC2 cluster with S3 registry storage. You can see the check finds the orphaned blobs successfully, but the delete cannot prune any of them:

# oc -n default exec -i -t "$(oc -n default get pods -l deploymentconfig=docker-registry -o jsonpath=$'{.items[0].metadata.name}\n')" -- /usr/bin/dockerregistry -prune=check
Would delete 346 blobs
Would free up 1.478 GiB of disk space
Use -prune=delete to actually delete the data

# oc -n default exec -i -t "$(oc -n default get pods -l deploymentconfig=docker-registry -o jsonpath=$'{.items[0].metadata.name}\n')" -- /usr/bin/dockerregistry -prune=delete
WARN[0021] No repositories found: s3aws: Path not found: /docker/registry/v2/repositories/install-test/nodejs-mongodb-example/_manifests go.version=go1.8.3 instance.id=fd2f3808-2250-4ac4-9d74-610c59ec97b9
Deleted 0 blobs
Freed up 0 B of disk space
So that means hard pruning doesn't support registry with S3 storage. Better to add some explanation; I will file a docs bug to track it. This bug could be moved to verified.
I am moving this back to ASSIGNED until Alexey confirms that hard prune does not work with S3 after this change; that doesn't sound right to me. The change was only supposed to resolve an ordering issue when pruning S3 storage. Also, Alexey, please link your PRs in the bug so we have a reference back to the changes that fixed it. In this case the PR was https://github.com/openshift/origin/pull/17020
(In reply to Dongbo Yan from comment #8)
> So that means hard pruning doesn't support registry with S3 storage. Better
> to add some explanation, I will file a docs bug to track it.
> This bug could be moved to verified

In comment #c0 it was shown that pruning found 346 blobs (137.8 MiB) but could not delete them. That was a bug: pruning should report the same statistics with -prune=check and -prune=delete.

In comment #c6 pruning didn't find anything and therefore didn't delete anything. That's not a bug.

It's important to know that WARN != ERR. It's fine for S3 storage not to have /docker/registry/v2/repositories, because that's how this storage works.

> So that means hard pruning doesn't support registry with S3 storage.

No, it does support it.
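The "Path not found" warning follows from how object stores work: S3 has no real directories, only key prefixes, so a "directory" stops existing the moment its last key is deleted. A minimal Python sketch with made-up keys (no AWS involved) to illustrate:

```python
# S3-like stores keep a flat set of object keys; "directories" are just
# shared key prefixes with no independent existence.
objects = {
    "docker/registry/v2/repositories/lgp/pod-for-ping/_manifests/tags/latest",
}

def path_exists(prefix):
    # A "path" exists only while at least one key starts with it.
    return any(key.startswith(prefix) for key in objects)

prefix = "docker/registry/v2/repositories/lgp/pod-for-ping/_manifests"
print(path_exists(prefix))  # True while the repository still has manifests

objects.clear()             # deleting the last manifest removes the "path"
print(path_exists(prefix))  # False -> the driver reports "Path not found"
```

This is why a prune run can legitimately log the warning while still being correct: once every manifest under a repository has been deleted, the prefix simply no longer exists.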
Verified with S3 storage, it works.

openshift v3.7.0-0.191.0
kubernetes v1.7.6+a08f5eeb62
etcd 3.2.8

# oc -n default exec -i -t "$(oc -n default get pods -l deploymentconfig=docker-registry -o jsonpath=$'{.items[0].metadata.name}\n')" -- /usr/bin/dockerregistry -prune=check
Would delete 168 blobs
Would free up 622.9 MiB of disk space
Use -prune=delete to actually delete the data

# oc -n default exec -i -t "$(oc -n default get pods -l deploymentconfig=docker-registry -o jsonpath=$'{.items[0].metadata.name}\n')" -- /usr/bin/dockerregistry -prune=delete
Deleted 168 blobs
Freed up 622.9 MiB of disk space
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2017:3188