Description - David Darrah / Red Hat QE, 2016-08-19 13:22:10 UTC
Description of problem:
This may well be a bug in the docker registry rather than in skopeo/atomic, but I wanted to open a bug to track the issue.
If you use 'atomic images delete --remote' to delete an image, the image immediately becomes unavailable on the remote registry, but you cannot re-push it to the registry even after garbage collection.
tcpdump shows the atomic client communicating with the registry when the push is attempted.
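Independently of docker and atomic, the registry's state can presumably also be checked directly over its v2 API (plain-HTTP access assumed, using the busyboxwhom repository from the reproduction below); the tag should vanish from this listing right after the remote delete and stay missing after the failed re-push:
# list the tags the registry currently holds for the repository
curl http://rhel-base:5000/v2/busyboxwhom/tags/list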
Version-Release number of selected component (if applicable):
atomic 1.11.0
skopeo version 0.1.14-dev
registry github.com/docker/distribution v2.5.0 (latest registry container)
How reproducible:
100%
Steps to Reproduce:
Setup
Host A - docker 1.10, running the latest docker.io/registry image as of last Thursday (rhel-base:5000)
docker run -d -e REGISTRY_STORAGE_DELETE_ENABLED=true -p 5000:5000 docker.io/registry
Host B running docker 1.10 with Atomic 1.11 and skopeo 0.1.14
Host C running docker 1.10
All hosts are RHEL.
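Since the registry is served over plain HTTP, the docker daemons on Hosts B and C presumably also need it configured as an insecure registry; on RHEL this is typically done via the docker daemon options (exact file and variable are assumptions here, adjust to your setup):
# /etc/sysconfig/docker on Hosts B and C (assumed location)
INSECURE_REGISTRY='--insecure-registry rhel-base:5000'
# then restart the docker daemon, e.g.: systemctl restart docker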
Procedure
1. On Host C, build an image from a Dockerfile and tag it with the Host A registry info:
FROM busybox
entrypoint /bin/who
[root@atomic-registry builder]# docker build -t rhel-base:5000/busyboxwhom .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM busybox
---> 2b8fd9751c4c
Step 2 : ENTRYPOINT /bin/who
---> Using cache
---> 0a18ce01aa55
Successfully built 0a18ce01aa55
2. Push from Host C to Host A
[root@atomic-registry builder]# docker push rhel-base:5000/busyboxwhom
The push refers to a repository [rhel-base:5000/busyboxwhom]
8ac8bfaff55a: Layer already exists
latest: digest: sha256:9d7f22bb0cbb71bc7bc03cfbda4592c21aedfb461d27d0dbd56fd0b6eb4ba77d size: 505
3. On Host B, call 'atomic images delete --remote' on the image on Host A - the output indicates that the image is marked for deletion
[root@fedora_vm ~]# atomic images delete --remote rhel-base:5000/busyboxwhom
Do you wish to delete ['rhel-base:5000/busyboxwhom']? (y/N) y
Image docker://rhel-base:5000/busyboxwhom marked for deletion
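As far as I understand, atomic performs this remote delete through skopeo; a roughly equivalent direct call (a sketch, not verified here) would be something like the following, possibly with TLS verification disabled for a plain-HTTP registry:
# assumed skopeo equivalent of the remote delete above
skopeo delete docker://rhel-base:5000/busyboxwhom:latest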
4. On Host A, run garbage collection in the running registry container - it reports deleting the blobs
[root@rhel-base build-image]# docker exec -it e1ee1c36997a bin/registry garbage-collect /etc/docker/registry/config.yml
INFO[0000] Deleting blob: /docker/registry/v2/blobs/sha256/8d/8ddc19f16526912237dd8af81971d5e4dd0587907234be2b83e249518d5b673f go.version=go1.6.3 instance.id=3a148b58-0249-4391-8f09-457ebbec5230
INFO[0000] Deleting blob: /docker/registry/v2/blobs/sha256/a3/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4 go.version=go1.6.3 instance.id=3a148b58-0249-4391-8f09-457ebbec5230
INFO[0000] Deleting blob: /docker/registry/v2/blobs/sha256/c4/c43b7b7f0e356a6c202705e96f640cffa32b15c4178d5ff67d73012b9b43ffe5 go.version=go1.6.3 instance.id=3a148b58-0249-4391-8f09-457ebbec5230
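For completeness, one can also check from outside the container that the registry no longer serves the manifest after the delete and GC; a 404 / MANIFEST_UNKNOWN response would be the expected answer (plain-HTTP access to the registry assumed):
# fetch the manifest directly from the registry's v2 API; expected to fail after the delete
curl -i http://rhel-base:5000/v2/busyboxwhom/manifests/latest \
     -H 'Accept: application/vnd.docker.distribution.manifest.v2+json'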
5. On Host C, delete the image built in step 1, then rebuild it with the same Dockerfile.
[root@atomic-registry builder]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/openshift/origin-docker-registry latest 579eac409085 24 minutes ago 349.1 MB
docker.io/busybox latest 2b8fd9751c4c 7 weeks ago 1.093 MB
[root@atomic-registry builder]# docker rmi 579eac409085
Untagged: docker.io/openshift/origin-docker-registry:latest
Deleted: sha256:579eac409085392c9ca498566d9b61531245ffd63e6078ff99eaf729e3c7036a
Deleted: sha256:34db1304880695eabcb7c8247a58db70e46c739032a74cb8ea8dee023275e321
Deleted: sha256:ebe5d2de1b01e30ce43a5602317c6ce0ad8df41cbb6f47dd5c2b308cd0e12a9d
[root@atomic-registry builder]# docker build -t rhel-base:5000/busyboxwhom .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM busybox
---> 2b8fd9751c4c
Step 2 : ENTRYPOINT /bin/who
---> Using cache
---> 0a18ce01aa55
Successfully built 0a18ce01aa55
6. Attempt to push the image to the registry on Host A - the push reports that the layer already exists
[root@atomic-registry builder]# docker push rhel-base:5000/busyboxwhom
The push refers to a repository [rhel-base:5000/busyboxwhom]
8ac8bfaff55a: Layer already exists
latest: digest: sha256:9d7f22bb0cbb71bc7bc03cfbda4592c21aedfb461d27d0dbd56fd0b6eb4ba77d size: 505
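The push claims the layer already exists even though garbage collection deleted the blobs. One way to see the mismatch (plain-HTTP access assumed, digest taken from the garbage-collect log in step 4) is to ask the registry for one of those blobs directly:
# HEAD request for a blob that garbage-collect reported as deleted; a 404 here while the
# push still says "Layer already exists" would show the inconsistency
curl -I http://rhel-base:5000/v2/busyboxwhom/blobs/sha256:8ddc19f16526912237dd8af81971d5e4dd0587907234be2b83e249518d5b673f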
7. Attempt to pull the image on Host B - the image is not found
[root@fedora_vm ~]# docker pull rhel-base:5000/busyboxwhom
Using default tag: latest
Trying to pull repository rhel-base:5000/busyboxwhom ...
Pulling repository rhel-base:5000/busyboxwhom
Error: image busyboxwhom not found
[root@fedora_vm ~]#
Actual results:
Push fails.
Expected results:
Should be able to re-push an image that was deleted from the registry.
Additional info:
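A possible explanation (an assumption, not verified here): in docker/distribution's filesystem layout each repository keeps per-layer link files under _layers/; if garbage-collect removes the blob data but leaves those link files in place, the registry would keep answering "Layer already exists" on a re-push while the actual data is gone, which would match the behavior above. With the default in-container storage path, the leftover links could be inspected with something like:
# look for leftover layer link files in the repository after garbage collection (paths assumed)
docker exec e1ee1c36997a find /var/lib/registry/docker/registry/v2/repositories/busyboxwhom/_layers -name link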
Comment 5 - RHEL Program Management, 2020-12-15 07:44:47 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.