Bug 1483930
Summary: | [trello_He2j63p0] Registry hard prune doesn't work with aws s3 storage | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | ge liu <geliu>
Component: | Image Registry | Assignee: | Alexey Gladkov <agladkov>
Status: | CLOSED ERRATA | QA Contact: | Dongbo Yan <dyan>
Severity: | medium | Docs Contact: |
Priority: | unspecified | |
Version: | 3.7.0 | CC: | agladkov, aos-bugs, bparees, dyan, geliu
Target Milestone: | --- | |
Target Release: | 3.7.0 | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2017-11-28 22:07:41 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
ge liu
2017-08-22 09:48:53 UTC
I can't reproduce this. Are you sure that another pruning process (a non-hard prune from a cronjob?) did not run in parallel?

The oadm prune command cannot prune orphan blobs:

# oadm prune images --certificate-authority=ca.crt --registry-url=docker-registry-default.com --keep-younger-than=0 --confirm

I can reproduce this on an EC2 cluster with S3 registry storage. You can see that the check finds orphan blobs successfully, but the delete cannot prune any of them:

# oc -n default exec -i -t "$(oc -n default get pods -l deploymentconfig=docker-registry -o jsonpath=$'{.items[0].metadata.name}\n')" -- /usr/bin/dockerregistry -prune=check
Would delete 346 blobs
Would free up 1.478 GiB of disk space
Use -prune=delete to actually delete the data

# oc -n default exec -i -t "$(oc -n default get pods -l deploymentconfig=docker-registry -o jsonpath=$'{.items[0].metadata.name}\n')" -- /usr/bin/dockerregistry -prune=delete
WARN[0021] No repositories found: s3aws: Path not found: /docker/registry/v2/repositories/install-test/nodejs-mongodb-example/_manifests go.version=go1.8.3 instance.id=fd2f3808-2250-4ac4-9d74-610c59ec97b9
Deleted 0 blobs
Freed up 0 B of disk space

So that means hard pruning doesn't support a registry with S3 storage. It would be better to add some explanation; I will file a docs bug to track it. This bug could be moved to VERIFIED.

I am moving this back to ASSIGNED until Alexey confirms: hard prune not working with S3 after this change doesn't sound right to me. The change was only supposed to resolve an ordering issue when pruning S3 storage. Also, Alexey, please link your PRs in the bug so we have a reference back to the changes that fixed the bug.

In this case the PR was https://github.com/openshift/origin/pull/17020

(In reply to Dongbo Yan from comment #8)
> So that means hard pruning doesn't support registry with S3 storage. Better
> to add some explanation, I will file a docs bug to track it.
> This bug could be moved to verified

In comment #c0 it was shown that pruning found 346 blobs (137.8 MiB) but could not delete them. That was a bug: pruning should report the same statistics with -prune=check and -prune=delete. In comment #c6 pruning didn't find anything and therefore didn't delete anything. That's not a bug. It's important to know that WARN != ERR. It's fine for S3 storage not to have /docker/registry/v2/repositories, because that's how the storage works.

> So that means hard pruning doesn't support registry with S3 storage.

No, it does support it.

Verified with S3 storage, it works:

openshift v3.7.0-0.191.0
kubernetes v1.7.6+a08f5eeb62
etcd 3.2.8

# oc -n default exec -i -t "$(oc -n default get pods -l deploymentconfig=docker-registry -o jsonpath=$'{.items[0].metadata.name}\n')" -- /usr/bin/dockerregistry -prune=check
Would delete 168 blobs
Would free up 622.9 MiB of disk space
Use -prune=delete to actually delete the data

# oc -n default exec -i -t "$(oc -n default get pods -l deploymentconfig=docker-registry -o jsonpath=$'{.items[0].metadata.name}\n')" -- /usr/bin/dockerregistry -prune=delete
Deleted 168 blobs
Freed up 622.9 MiB of disk space

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:3188
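For reference, the check/delete sequence used in the comments above can be wrapped in a small script. This is a minimal sketch, not part of the original report: it assumes the registry runs as the docker-registry deploymentconfig in the default namespace and that the dockerregistry binary is at /usr/bin/dockerregistry inside the pod, exactly as in the commands shown in this bug.

#!/bin/bash
# Minimal sketch of the hard-prune workflow exercised in this bug
# (assumes the default namespace and the docker-registry deploymentconfig).
set -euo pipefail

# Resolve the name of a running registry pod.
registry_pod="$(oc -n default get pods -l deploymentconfig=docker-registry \
  -o jsonpath='{.items[0].metadata.name}')"

# Dry run: report how many orphaned blobs would be removed and how much space would be freed.
oc -n default exec "$registry_pod" -- /usr/bin/dockerregistry -prune=check

# Actually delete the orphaned blobs. With S3 storage, a WARN line such as
# "No repositories found: s3aws: Path not found: ..." is expected and harmless (WARN != ERR).
oc -n default exec "$registry_pod" -- /usr/bin/dockerregistry -prune=delete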