Bug 1344881

Summary: [platformmanagement_public_426] Partially uploaded layers should be deleted when an image push fails.
Product: OKD
Reporter: zhou ying <yinzhou>
Component: Image Registry
Assignee: Michal Minar <miminar>
Status: CLOSED UPSTREAM
QA Contact: Wei Sun <wsun>
Severity: medium
Priority: medium
Version: 3.x
CC: aos-bugs, haowang, mfojtik, pweil
Keywords: UpcomingRelease
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Last Closed: 2017-02-20 13:51:31 UTC
Type: Bug

Description zhou ying 2016-06-12 06:51:31 UTC
Description of problem:
When an image's size exceeds the maximum set by the project's limits, the push fails, but the docker-registry still stores layers and blobs of that image. The partially uploaded layers should be deleted.

Version-Release number of selected component (if applicable):
openshift v1.3.0-alpha.1-251-ga19279f
kubernetes v1.3.0-alpha.1-331-g0522e63
etcd 2.3.0

How reproducible:
Always

Steps to Reproduce:
1. Start OpenShift and log in;
2. Set the limits for the project (a sketch of the commands follows these steps):
more limits.yaml 
apiVersion: "v1"
kind: "LimitRange"
metadata:
  name: "openshift-resource-limits"
spec:
  limits:
    -
      type: openshift.io/Image
      max:
        storage: 140Mi
    -
      type: openshift.io/ImageStream
      max:
        openshift.io/image-tags: 5
        openshift.io/images: 8

3. Log in to the docker-registry with the user's token;
4. Tag and push an image whose size exceeds the project's maximum limit:
    `docker tag docker.io/zhouying7780/singlelayer 172.30.233.108:5000/zhouy/singlelayer`
    `docker push 172.30.233.108:5000/zhouy/singlelayer`
5. Check the docker-registry's volume.
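
A sketch of how steps 2 and 3 might be carried out (the project name "zhouy" and the registry address 172.30.233.108:5000 are taken from the push commands above; exact flags may differ by client version):
    `oc create -f limits.yaml -n zhouy`
    `docker login -u $(oc whoami) -p $(oc whoami -t) 172.30.233.108:5000`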


Actual results:
5. Layers and blobs from the failed push are still stored in the docker-registry.
docker exec -it 427c5c8e9cde find /registry
/registry
/registry/docker
/registry/docker/registry
/registry/docker/registry/v2
/registry/docker/registry/v2/repositories
/registry/docker/registry/v2/repositories/zhouy
/registry/docker/registry/v2/repositories/zhouy/singlelayer
/registry/docker/registry/v2/repositories/zhouy/singlelayer/_uploads
/registry/docker/registry/v2/repositories/zhouy/singlelayer/_layers
/registry/docker/registry/v2/repositories/zhouy/singlelayer/_layers/sha256
/registry/docker/registry/v2/repositories/zhouy/singlelayer/_layers/sha256/4383798025cde56ad0d232f6c97da3edea0074ad40ced804658fc847c5abcb3e
/registry/docker/registry/v2/repositories/zhouy/singlelayer/_layers/sha256/4383798025cde56ad0d232f6c97da3edea0074ad40ced804658fc847c5abcb3e/link
/registry/docker/registry/v2/repositories/zhouy/singlelayer/_layers/sha256/dc820aecd793e42b94304bc8d00242b79f29496536c6afa92ddd85b7bd7a0d7e
/registry/docker/registry/v2/repositories/zhouy/singlelayer/_layers/sha256/dc820aecd793e42b94304bc8d00242b79f29496536c6afa92ddd85b7bd7a0d7e/link
/registry/docker/registry/v2/blobs
/registry/docker/registry/v2/blobs/sha256
/registry/docker/registry/v2/blobs/sha256/43
/registry/docker/registry/v2/blobs/sha256/43/4383798025cde56ad0d232f6c97da3edea0074ad40ced804658fc847c5abcb3e
/registry/docker/registry/v2/blobs/sha256/43/4383798025cde56ad0d232f6c97da3edea0074ad40ced804658fc847c5abcb3e/data
/registry/docker/registry/v2/blobs/sha256/dc
/registry/docker/registry/v2/blobs/sha256/dc/dc820aecd793e42b94304bc8d00242b79f29496536c6afa92ddd85b7bd7a0d7e
/registry/docker/registry/v2/blobs/sha256/dc/dc820aecd793e42b94304bc8d00242b79f29496536c6afa92ddd85b7bd7a0d7e/data



Expected results:
5. When the push fails, the partially uploaded layers should also be deleted.

Comment 1 Michal Minar 2016-06-13 06:18:24 UTC
You're right, this is how it should work. Unfortunately, we're not there yet. This will have to wait for the rework of image pruning: https://trello.com/c/3ZmBKDZZ/437-8-refactor-registry-layer-and-signature-pruning.

Comment 2 Michal Fojtik 2016-10-25 14:12:00 UTC
Setting as upcoming release because this is waiting for the card Michal pointed out.

Comment 3 Michal Fojtik 2017-02-01 11:33:32 UTC
Still waiting for the pruning refactoring; we don't have the necessary information about these blobs, so we can't prune them efficiently.