Bug 1372065
| Summary: | Some applications (e.g. EAP) still pushing unchanged layers when building multiple namespaces | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Mike Fiedler <mifiedle> |
| Component: | Image Registry | Assignee: | Michal Minar <miminar> |
| Status: | CLOSED ERRATA | QA Contact: | Mike Fiedler <mifiedle> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.3.0 | CC: | aos-bugs, mfojtik, mifiedle, miminar, tdawson, vlaad, wsun |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | docker-1.12.3-1.el7 | Doc Type: | Bug Fix |
| Doc Text: | Cause: Prior Docker versions checked only one layer digest for existence in the remote repository before falling back to a full blob upload. However, each layer can have multiple associated digests, depending on the Docker version used to push the image to a source registry. Consequence: During an image push, the Docker daemon could pick a layer digest that did not exist in the remote repository and fall back to a full blob upload, even though it knew of another digest for the same layer that did exist there. Fix: The Docker daemon now sorts candidate layer digests by the similarity of their source repository to the remote repository and probes several of them before falling back to a full blob re-upload. Result: Docker pushes should be faster when layers already exist in the remote registry. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-01-18 12:53:06 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Mike Fiedler
2016-08-31 20:01:46 UTC
I'll try to reproduce. I have no idea why the described behavior happens just for eap64-basic-s2i and not for the others.

Thanks for the report, Mike! So the problem here is that registry.access.redhat.com/jboss-eap-6/eap64-openshift was pushed by some older Docker version. The uploaded blobs stored there have different digests than those uploaded with Docker 1.10 to the internal registry.
Take a look at docker's internal metadata:
```
$ cat v2metadata-by-diffid/sha256/821a26b9542a1eb5c575a9f5095cb5c652a5f05b14b17d77e661ae1df7003a2b | json_reformat
[
    {
        "Digest": "sha256:821a26b9542a1eb5c575a9f5095cb5c652a5f05b14b17d77e661ae1df7003a2b",
        "SourceRepository": "registry.access.redhat.com/jboss-eap-6/eap64-openshift"
    },
    {
        "Digest": "sha256:42ee9072ed9116e9ea4b4a4c5d67db98237f9bb3582a31a5bb45586f2388075e",
        "SourceRepository": "172.30.241.183:5000/eap1/eap-app"
    },
    {
        "Digest": "sha256:42ee9072ed9116e9ea4b4a4c5d67db98237f9bb3582a31a5bb45586f2388075e",
        "SourceRepository": "172.30.241.183:5000/eap2/eap-app"
    }
]
```
The digest `sha256:821a26b9542a1eb5c575a9f5095cb5c652a5f05b14b17d77e661ae1df7003a2b` belongs to the blob pulled from registry.access.redhat.com/jboss-eap-6/eap64-openshift. The same blob, uploaded to the internal registry with Docker 1.10.3, has the different digest `sha256:42ee9072ed9116e9ea4b4a4c5d67db98237f9bb3582a31a5bb45586f2388075e`. The difference is caused either by changes in the tar-split library or perhaps by a different build host environment; the version of Docker used to build the image is the main factor.
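To make the fix concrete, here is a minimal sketch (illustrative only, not Docker's actual Go code; digests abbreviated) of ranking the v2metadata entries above so that digests recorded from repositories most similar to the push target are probed first:

```python
def repo_similarity(source_repo: str, target_repo: str) -> int:
    """Count leading path components the two repository names share."""
    score = 0
    for a, b in zip(source_repo.split("/"), target_repo.split("/")):
        if a != b:
            break
        score += 1
    return score

def rank_digests(v2_metadata, target_repo):
    """Order candidate digests so entries from repositories most similar
    to the push target come first (sorted() is stable, so ties keep
    their original order)."""
    ranked = sorted(
        v2_metadata,
        key=lambda m: repo_similarity(m["SourceRepository"], target_repo),
        reverse=True,
    )
    return [m["Digest"] for m in ranked]

# Abbreviated copies of the metadata entries shown above.
metadata = [
    {"Digest": "sha256:821a26b9...",
     "SourceRepository": "registry.access.redhat.com/jboss-eap-6/eap64-openshift"},
    {"Digest": "sha256:42ee9072...",
     "SourceRepository": "172.30.241.183:5000/eap1/eap-app"},
    {"Digest": "sha256:42ee9072...",
     "SourceRepository": "172.30.241.183:5000/eap2/eap-app"},
]

# Pushing to a third namespace in the same internal registry: both
# internal-registry digests now rank ahead of the access.redhat.com one.
print(rank_digests(metadata, "172.30.241.183:5000/eap3/eap-app"))
```

With this ordering, a push into a new namespace of the internal registry probes `sha256:42ee9072...` first, which is the digest that actually exists there.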
What happens during a blob upload is that the daemon checks the internal registry only for the digest in the first entry of the JSON above. That check obviously fails, because the digest exists only in registry.access.redhat.com/jboss-eap-6/eap64-openshift.
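The pre-fix and post-fix behavior can be contrasted with a hedged sketch (hypothetical helper names, not Docker's code; `blob_exists` stands in for a `HEAD /v2/<repo>/blobs/<digest>` probe against the target registry, `upload` for the full blob upload):

```python
def push_layer(candidate_digests, blob_exists, upload):
    """Probe each candidate digest; re-upload only if none exists."""
    for digest in candidate_digests:
        if blob_exists(digest):
            return ("skipped", digest)  # layer already present remotely
    return ("uploaded", upload())

# The internal registry only knows the layer under its 1.10-era digest.
internal = {"sha256:42ee9072..."}
exists = lambda d: d in internal

# Fixed daemon: probes several candidates and finds the existing blob.
print(push_layer(["sha256:821a26b9...", "sha256:42ee9072..."],
                 exists, lambda: "sha256:new"))

# Pre-fix daemon: effectively probed only the first recorded digest,
# missed, and re-uploaded a layer the registry already had.
print(push_layer(["sha256:821a26b9..."], exists, lambda: "sha256:new"))
```

The first call skips the upload; the second falls back to a full re-upload, which is exactly the wasted work reported in this bug.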
So my previous Docker patch is flawed. I'm working on an enhanced version that I'll submit upstream.
Here's an upstream PR: https://github.com/docker/docker/pull/26564

The upstream PR has been merged. It is being back-ported to our 1.10 and 1.12 releases:
- https://github.com/projectatomic/docker/pull/202
- https://github.com/projectatomic/docker/pull/203

Merged in the docker-1.12.2 branch.

*** Bug 1347022 has been marked as a duplicate of this bug. ***

Can we close this one? Is there a Red Hat Docker build in our channels with the fix included?

extras-rhel-7.3-candidate, or the Brew build https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=520605

Mike, is there a chance we can get this verified using the Brew-built Docker?

RHEL 7.3 (and the corresponding Extras channel) only has docker-1.10.3-57.el7, so I don't think it is right to close this, since our users are still experiencing it.

Verified the issue is fixed using docker-1.12.3-4 from the Extras repo on mirror.openshift.com.

docker-1.12.3 should be released by the time OpenShift 3.4 is released. Marking this as part of the 3.4 release and moving it through that errata process.

I accidentally switched this to VERIFIED. Moving back to ON_QA.

Hi Mike, could you help check whether this bug has been fixed? If it has, please help verify it. Thanks!

Verified on Docker 1.12.3-8 and OCP 3.4.0.33.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0066