Bug 1471844

Summary: [3.4][Registry][Pruning] Orphaned blobs cannot be pruned
Product: OpenShift Container Platform Reporter: Michal Minar <miminar>
Component: Image Registry    Assignee: Michal Minar <miminar>
Status: CLOSED ERRATA QA Contact: ge liu <geliu>
Severity: urgent Docs Contact:
Priority: urgent    
Version: 3.4.1    CC: aos-bugs, clichybi, dcaldwel, dmoessne, erich, geliu, jkaur, mfojtik, miminar, misalunk, pdwyer, stwalter, xtian, yinzhou
Target Milestone: ---    Keywords: Reopened
Target Release: 3.4.z   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: Bug Fix
Doc Text:
Cause: There was no way to prune orphaned blobs on the integrated registry's storage. Consequence: Orphaned blobs could pile up and consume a considerable amount of free space. Fix: A new low-level utility is provided that runs inside the registry's container and removes orphaned blobs. Result: Customers are now able to remove orphaned blobs and reclaim storage space.
Story Points: ---
Clone Of: 1467340 Environment:
Last Closed: 2017-08-31 17:00:23 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1408676, 1467340, 1472438, 1479340, 1499314, 1499315    
Bug Blocks:    

Comment 2 Michal Minar 2017-07-18 11:25:22 UTC
PR merged.

Comment 4 Michal Minar 2017-07-19 08:37:28 UTC
So it looks like the OCP builds have the binary under /usr/bin.
Could you try the prune command once again with a slight modification:

  pod="$(oc -n default get pods -l deploymentconfig=docker-registry \
        -o jsonpath=$'{.items[0].metadata.name}\n')"
  oc -n default exec -i -t "${pod}" -- /usr/bin/dockerregistry -prune=delete

?

Comment 5 ge liu 2017-07-19 09:32:09 UTC
@miminar, it works well now. The extended steps also passed. Please update the doc text above if necessary, thanks.

Setting and unsetting the registry's read-only mode works well, and the hard prune runs successfully:

# pod="$(oc -n default get pods -l deploymentconfig=docker-registry \
>         -o jsonpath=$'{.items[0].metadata.name}\n')"

[root@qe-geliu-34master-registry-router-nfs-1 ~]# oc -n default exec -i -t "${pod}" -- /usr/bin/dockerregistry -prune=check

WARN[0000] Ignoring unrecognized environment variable REGISTRY_CONSOLE_PORT 
WARN[0000] Ignoring unrecognized environment variable REGISTRY_CONSOLE_PORT_9000_TCP 
WARN[0000] Ignoring unrecognized environment variable REGISTRY_CONSOLE_PORT_9000_TCP_ADDR 
WARN[0000] Ignoring unrecognized environment variable REGISTRY_CONSOLE_PORT_9000_TCP_PORT 
WARN[0000] Ignoring unrecognized environment variable REGISTRY_CONSOLE_PORT_9000_TCP_PROTO 
WARN[0000] Ignoring unrecognized environment variable REGISTRY_CONSOLE_SERVICE_HOST 
WARN[0000] Ignoring unrecognized environment variable REGISTRY_CONSOLE_SERVICE_PORT 
WARN[0000] Ignoring unrecognized environment variable REGISTRY_CONSOLE_SERVICE_PORT_REGISTRY_CONSOLE 
Would delete 11 blobs
Would free up 3.028 MiB of disk space
Use -prune=delete to actually delete the data

[root@qe-geliu-34master-registry-router-nfs-1 ~]# oc -n default exec -i -t "${pod}" -- /usr/bin/dockerregistry -prune=delete
WARN[0000] Ignoring unrecognized environment variable REGISTRY_CONSOLE_PORT 
WARN[0000] Ignoring unrecognized environment variable REGISTRY_CONSOLE_PORT_9000_TCP 
WARN[0000] Ignoring unrecognized environment variable REGISTRY_CONSOLE_PORT_9000_TCP_ADDR 
WARN[0000] Ignoring unrecognized environment variable REGISTRY_CONSOLE_PORT_9000_TCP_PORT 
WARN[0000] Ignoring unrecognized environment variable REGISTRY_CONSOLE_PORT_9000_TCP_PROTO 
WARN[0000] Ignoring unrecognized environment variable REGISTRY_CONSOLE_SERVICE_HOST 
WARN[0000] Ignoring unrecognized environment variable REGISTRY_CONSOLE_SERVICE_PORT 
WARN[0000] Ignoring unrecognized environment variable REGISTRY_CONSOLE_SERVICE_PORT_REGISTRY_CONSOLE 
Deleted 11 blobs
Freed up 3.028 MiB of disk space
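For reference, the read-only set/unset step mentioned above can be sketched as follows. This is a sketch, assuming the registry runs as the docker-registry deployment config in the default project and honors the registry's REGISTRY_STORAGE_MAINTENANCE_READONLY environment variable:

```shell
# Put the integrated registry into read-only mode before hard pruning,
# so no writes race with blob deletion (triggers a new deployment).
oc -n default set env dc/docker-registry \
    'REGISTRY_STORAGE_MAINTENANCE_READONLY={"enabled":true}'

# ... run the hard prune here (-prune=check first, then -prune=delete) ...

# Return the registry to read-write mode afterwards.
oc -n default set env dc/docker-registry \
    'REGISTRY_STORAGE_MAINTENANCE_READONLY={"enabled":false}'
```

Each `oc set env` call rolls out a new registry deployment, so wait for the rollout to finish before running the prune.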

Comment 7 errata-xmlrpc 2017-08-31 17:00:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1828

Comment 8 Michal Minar 2017-10-09 07:36:28 UTC
*** Bug 1499314 has been marked as a duplicate of this bug. ***