Bug 1483930 - [trello_He2j63p0] Registry hard prune doesn't work with aws s3 storage
Summary: [trello_He2j63p0] Registry hard prune doesn't work with aws s3 storage
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Image Registry
Version: 3.7.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.7.0
Assignee: Alexey Gladkov
QA Contact: Dongbo Yan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-08-22 09:48 UTC by ge liu
Modified: 2017-11-28 22:07 UTC (History)
5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-11-28 22:07:41 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:3188 0 normal SHIPPED_LIVE Moderate: Red Hat OpenShift Container Platform 3.7 security, bug, and enhancement update 2017-11-29 02:34:54 UTC

Description ge liu 2017-08-22 09:48:53 UTC
Description of problem:

As the title says, the hard prune fails with this error message:  No repositories found: s3aws: Path not found: /docker/registry/v2/repositories/lgp/pod-for-ping/_manifests  go.version=go1.8.3 instance.id=99e14e7c-215d-4b26-9c93-419401cbf0b8

openshift v3.7.0-0.104.0
kubernetes v1.7.0+695f48a16f
etcd 3.2.1

How reproducible:
Always

Steps to Reproduce:
1. Push some images to the integrated docker registry.
2. Import some other images and pull them via integrated registry (to mirror the blobs).
3. Delete some images from both groups using oc delete.
4. Turn the registry into read-only mode (see documentation).
5. Verify registry is usable for pulls but not for pushes.
6. Run the hard prune.
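
Steps 4 and 6 can be sketched as commands. This is only a sketch: it assumes the default "docker-registry" deployment config in the "default" project and the REGISTRY_STORAGE_MAINTENANCE_READONLY toggle documented for OpenShift 3.x hard pruning; adjust names to your cluster, and note it requires a live cluster to run.

```shell
# Step 4: put the registry into read-only mode (documented env toggle).
oc -n default set env dc/docker-registry \
    'REGISTRY_STORAGE_MAINTENANCE_READONLY={"enabled":true}'

# Step 6: dry-run the hard prune first, then actually delete.
POD="$(oc -n default get pods -l deploymentconfig=docker-registry \
    -o 'jsonpath={.items[0].metadata.name}')"
oc -n default exec -it "$POD" -- /usr/bin/dockerregistry -prune=check
oc -n default exec -it "$POD" -- /usr/bin/dockerregistry -prune=delete

# Afterwards, switch the registry back to read-write mode.
oc -n default set env dc/docker-registry \
    'REGISTRY_STORAGE_MAINTENANCE_READONLY={"enabled":false}'
```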

# oc -n default exec -i -t "$(oc -n default get pods -l deploymentconfig=docker-registry -o jsonpath=$'{.items[0].metadata.name}\n')" -- /usr/bin/dockerregistry -prune=check
WARN[0000] Ignoring unrecognized environment variable REGISTRY_CONSOLE_PORT 
......
REGISTRY_CONSOLE_SERVICE_PORT_REGISTRY_CONSOLE 
Would delete 70 blobs
Would free up 137.8 MiB of disk space
Use -prune=delete to actually delete the data
[root@ip-172-18-11-146 ~]# oc -n default exec -i -t "$(oc -n default get pods -l deploymentconfig=docker-registry -o jsonpath=$'{.items[0].metadata.name}\n')" -- /usr/bin/dockerregistry -prune=delete
WARN[0000] Ignoring unrecognized environment variable REGISTRY_CONSOLE_PORT 
.................
WARN[0008] No repositories found: s3aws: Path not found: /docker/registry/v2/repositories/lgp/pod-for-ping/_manifests  go.version=go1.8.3 instance.id=99e14e7c-215d-4b26-9c93-419401cbf0b8
Deleted 0 blobs
Freed up 0 B of disk space

setup:

# oc exec -it docker-registry-3-ghl6x -- more /etc/registry/config.yml
version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  delete:
    enabled: true
  cache:
    blobdescriptor: inmemory
  s3:
    accesskey: xxxx....
    secretkey: gxxxxxxxxx......
    region: xxx
    bucket: openshiftxxxxx
    encrypt: False
    secure: true
    v4auth: true
    rootdirectory: /registry
    chunksize: "26214400"
auth:
  openshift:
    realm: openshift
middleware:  
  registry:  
  - name: openshift
  repository:
  - name: openshift
    options: 
      pullthrough: True
      acceptschema2: True
      enforcequota: False
  storage:   
  - name: openshift


Actual results:
Registry hard prune doesn't work with aws s3 storage
Expected results:
Registry hard prune should work with aws s3 storage

Comment 3 Alexey Gladkov 2017-10-17 12:22:47 UTC
I can't reproduce this.

Are you sure that another pruning process (a non-hard prune from a cron job?) was not running in parallel?

Comment 4 Dongbo Yan 2017-10-18 09:50:09 UTC
The oadm prune command cannot prune orphan blobs:
# oadm prune images --certificate-authority=ca.crt --registry-url=docker-registry-default.com --keep-younger-than=0 --confirm 

I can reproduce this on an EC2 cluster with S3 registry storage: -prune=check finds the orphan blobs successfully, but -prune=delete does not remove any of them.

# oc -n default exec -i -t "$(oc -n default get pods -l deploymentconfig=docker-registry -o jsonpath=$'{.items[0].metadata.name}\n')" -- /usr/bin/dockerregistry -prune=check
 
Would delete 346 blobs
Would free up 1.478 GiB of disk space
Use -prune=delete to actually delete the data

# oc -n default exec -i -t "$(oc -n default get pods -l deploymentconfig=docker-registry -o jsonpath=$'{.items[0].metadata.name}\n')" -- /usr/bin/dockerregistry -prune=delete
 
WARN[0021] No repositories found: s3aws: Path not found: /docker/registry/v2/repositories/install-test/nodejs-mongodb-example/_manifests  go.version=go1.8.3 instance.id=fd2f3808-2250-4ac4-9d74-610c59ec97b9
Deleted 0 blobs
Freed up 0 B of disk space

Comment 8 Dongbo Yan 2017-11-03 02:03:58 UTC
So that means hard pruning doesn't support a registry with S3 storage. It would be better to add some explanation; I will file a docs bug to track it.
This bug could be moved to verified

Comment 9 Ben Parees 2017-11-03 02:22:53 UTC
I am moving this back to ASSIGNED until Alexey confirms whether hard prune works with S3 after this change; the claim that it doesn't sounds wrong to me. The change was only supposed to resolve an ordering issue when pruning S3 storage.


Also, Alexey, please link your PRs in the bug so we have a reference back to the changes that fixed the bug.

in this case the PR was https://github.com/openshift/origin/pull/17020

Comment 10 Alexey Gladkov 2017-11-03 13:24:26 UTC
(In reply to Dongbo Yan from comment #8)
> So that means hard pruning doesn't support a registry with S3 storage. It
> would be better to add some explanation; I will file a docs bug to track it.
> This bug could be moved to verified

In comment #c0, pruning was shown to find 346 blobs (137.8 MiB) but fail to delete them. That was a bug: pruning should report the same statistics for -prune=check and -prune=delete.

In comment #c6, pruning didn't find anything and therefore didn't delete anything. That is not a bug.

It's important to know that WARN != ERR. It is OK for S3 storage not to have /docker/registry/v2/repositories, because that's how the storage works.

> So that means hard pruning doesn't support a registry with S3 storage.

No, it does support it.
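
The point about the missing repositories path can be illustrated with a minimal sketch (hypothetical keys and helper, not the registry's actual code): S3 has no real directories, only object keys sharing a prefix, so a "directory" such as .../repositories/<repo>/_manifests ceases to exist the moment its last object is deleted, and a prefix listing then finds nothing.

```python
# Simulated S3 bucket: a flat map from object key to content.
bucket = {
    "registry/docker/registry/v2/repositories/lgp/pod-for-ping/_manifests/revisions/sha256/abc/link": "sha256:abc",
    "registry/docker/registry/v2/blobs/sha256/ab/abc/data": "<blob bytes>",
}

def list_prefix(store, prefix):
    """Emulate an S3 ListObjects call: return the keys under a prefix."""
    keys = [k for k in store if k.startswith(prefix)]
    if not keys:
        raise FileNotFoundError("Path not found: /" + prefix)
    return keys

manifests = "registry/docker/registry/v2/repositories/lgp/pod-for-ping/_manifests"

# While a manifest link exists, the repository "directory" is listable.
assert len(list_prefix(bucket, manifests)) == 1

# Deleting the image removes the last key under the prefix...
del bucket[manifests + "/revisions/sha256/abc/link"]

# ...after which listing the prefix fails. The prune walk logs this as a
# WARN and moves on; for S3-backed storage it is expected, not an error.
try:
    list_prefix(bucket, manifests)
except FileNotFoundError as err:
    print("WARN:", err)
```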

Comment 12 Dongbo Yan 2017-11-06 07:03:40 UTC
Verified with S3 storage; it works.
openshift v3.7.0-0.191.0
kubernetes v1.7.6+a08f5eeb62
etcd 3.2.8

# oc -n default exec -i -t "$(oc -n default get pods -l deploymentconfig=docker-registry -o jsonpath=$'{.items[0].metadata.name}\n')" -- /usr/bin/dockerregistry -prune=check

Would delete 168 blobs
Would free up 622.9 MiB of disk space
Use -prune=delete to actually delete the data

# oc -n default exec -i -t "$(oc -n default get pods -l deploymentconfig=docker-registry -o jsonpath=$'{.items[0].metadata.name}\n')" -- /usr/bin/dockerregistry -prune=delete
 
Deleted 168 blobs
Freed up 622.9 MiB of disk space

Comment 16 errata-xmlrpc 2017-11-28 22:07:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:3188

