Description of problem:
Wrong build steps when using an external image as a stage in a multi-stage build.
Here is my Dockerfile https://github.com/xiuwang/multi-stage-builds/blob/externalimage/Dockerfile
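For readers without access to the repo, the Dockerfile can be approximated from the build log below and the COPY line quoted later in this thread. This is a reconstruction, not the authoritative file; in particular, the exact position of the COPY --from line is a guess:

```Dockerfile
# Reconstructed sketch -- see the linked repo for the real file.
FROM golang:1.9 AS builder
WORKDIR /tmp
COPY . .
RUN echo foo > /tmp/bar
# References an external image that is not a stage in this file:
COPY --from=nginx:latest /etc/nginx/nginx.conf /tmp/bar
```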
Version-Release number of selected component (if applicable):
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.nightly-2019-02-28-054829   True        False         3h4m    Cluster version is 4.0.0-0.nightly-2019-02-28-054829
Steps to Reproduce:
1. Create a multi-stage build using an external image as a stage:
oc new-build https://github.com/xiuwang/multi-stage-builds#externalimage
2. Check the build logs.
Actual results:
STEP 1 should be 'FROM golang:1.9 AS builder'; there is no 'FROM nginx:latest' instruction in my Dockerfile.
STEP 1: FROM nginx:latest
STEP 2: FROM golang:1.9 AS builder
STEP 3: WORKDIR /tmp
STEP 4: COPY . .
STEP 5: RUN echo foo > /tmp/bar
Expected results:
The 'FROM nginx:latest' step should not appear in the build log.
Additional info:
1. The same Dockerfile builds as expected with the docker CLI ('docker build .').
2. If the step 'COPY --from=nginx:latest /etc/nginx/nginx.conf /tmp/bar' is added to the Dockerfile several times, the 'FROM nginx:latest' step shows up the same number of times in the logs:
STEP 1: FROM nginx:latest
STEP 2: FROM nginx:latest
STEP 3: FROM golang:1.9 AS builder
STEP 4: WORKDIR /tmp
STEP 5: COPY . .
STEP 6: RUN echo foo > /tmp/bar
To avoid the confusion I faced, this is the Dockerfile in question:
I assume that buildah is injecting the "FROM nginx:latest" message as a result of this:
(and it probably has to, in order to be able to copy from that image... it is slightly confusing that it prints it, but it's probably correct).
Moving this to the container runtimes team since they own buildah.
(I would agree that buildah should not "FROM" the image multiple times, if the same image is referenced multiple times, that seems like a legitimate bug).
(In reply to Ben Parees from comment #1)
> To avoid the confusion I faced, this is the dockerfile in question:
> I assume that buildah is injecting the "FROM nginx:latest" message as a
> result of this:
> (and it probably has to, in order to be able to copy from that image... it
> is slightly confusing that it prints it, but it's probably correct).
I think you're correct about what's happening - internally we're prepending FROMs so that we'll have a container root filesystem, based on the specified image, that we can use instead of the build context's root.
Any chance this is fixed in buildah 1.7?
This one might be fixed, let me see if I can test/verify.
No luck on the prior fix; it's still a problem. I'm going to guess it's somewhere in the imagebuilder staging prep code.
Forgot to add: Docker does not reorder the steps the way we apparently do.
I don't know why yet, but it appears something wonky is going on because of this line in the Dockerfile:
COPY --from=nginx:latest /etc/nginx/nginx.conf /tmp/bar
I should have caught this earlier; apologies for not doing so. This behavior comes from this PR: https://github.com/containers/buildah/pull/1181. Prior to that PR, any COPY command with a '--from' directive failed unless the image specified by --from had already been pulled. With the PR, we first go through the Dockerfile looking for any 'COPY --from' statements; for each one found, we prepend a FROM directive to the in-memory Dockerfile, and only then run the Dockerfile. That way we ensure the image is present by the time the 'COPY --from' is encountered.
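Purely as an illustration (buildah's real implementation is in Go, and the function below is hypothetical), the preprocessing described above can be sketched like this. Note that because there is no deduplication, each 'COPY --from' occurrence produces its own prepended FROM, which matches the repeated 'FROM nginx:latest' lines in the original report:

```python
import re


def prepend_copy_from_stages(dockerfile: str) -> str:
    """Sketch of the PR #1181 preprocessing: scan the Dockerfile for
    `COPY --from=<image>` lines and prepend one FROM line per
    occurrence, so the referenced image gets pulled up front.
    (Illustrative only; not buildah's actual code.)"""
    stage_names = set()
    prepended = []
    for line in dockerfile.splitlines():
        m = re.match(r'\s*FROM\s+\S+\s+[Aa][Ss]\s+(\S+)', line)
        if m:
            # Remember named stages; --from may legally reference them.
            stage_names.add(m.group(1).lower())
            continue
        m = re.match(r'\s*COPY\s+--from=(\S+)', line)
        # Only external images need a prepended FROM; named stages
        # defined earlier in the file are already available.
        if m and m.group(1).lower() not in stage_names:
            prepended.append(f'FROM {m.group(1)}')
    return '\n'.join(prepended + [dockerfile])
```

Running this over the reporter's Dockerfile yields exactly the extra leading 'FROM nginx:latest' step (twice, if the COPY line appears twice), while a '--from=builder' referencing a named stage adds nothing.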
This does cause the output of 'podman build' and 'buildah bud' to differ from 'docker build' output. I believe the resulting container is equivalent, though.
We could possibly move the prepended FROM statement to sit just above the corresponding 'COPY --from' statement in the internal Dockerfile, but I think we would still have a difference in output between Docker and Buildah/Podman. I'm not sure we can get around that entirely, at least not easily.
Nalin might have other thoughts, but won't be checking in until early next week.
I've got some more changes for our multistage logic coming soon, and I think this can be sorted (by pulling the named image if it isn't found in local storage at the time we encounter the COPY instruction) after that effort. The current set of changes that it would build on aren't expected for this next release, though, so I think we need to defer this a bit.
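The lazy approach described here can be sketched as follows; the names are hypothetical and the handling of named stages is omitted, so treat it as a sketch of the idea rather than buildah's actual change (which landed in Go). An image referenced by several 'COPY --from' instructions is pulled only once, and no extra FROM steps appear in the log:

```python
def run_instructions(instructions, local_store, pull_image):
    """Illustrative sketch: instead of prepending FROM lines to the
    Dockerfile up front, pull a `COPY --from` image lazily, the first
    time it is actually referenced.  `local_store` stands in for local
    image storage; `pull_image` stands in for the actual pull.
    (Hypothetical names; named-stage handling omitted.)"""
    for inst in instructions:
        if inst.startswith("COPY --from="):
            image = inst.split("--from=", 1)[1].split()[0]
            if image not in local_store:  # pull only on first use
                pull_image(image)
                local_store.add(image)
        # ... execute the instruction itself ...
```

With this shape, repeated references to busybox:latest trigger a single pull, addressing the duplicated-FROM complaint from comment #1 as well.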
Okay, this should be fixed in the buildah library since https://github.com/containers/buildah/pull/1489 was merged. We'll need to update openshift/builder to include it or a newer version in order for the changes to take effect for OpenShift builds.
I can't reproduce this issue with the 4.1.0-0.nightly-2019-05-14-202907 payload. Could you move this to ON_QA so that I can verify it?
Here are the build logs:
STEP 1: FROM centos:7 AS test
STEP 2: USER 1001
STEP 3: FROM centos:7
STEP 4: COPY --from=test /usr/bin/curl /test/
STEP 5: COPY --from=busybox:latest /bin/echo /test/
STEP 6: COPY --from=busybox:latest /bin/ping /test/
STEP 7: ENV "OPENSHIFT_BUILD_NAME"="multi-stage" "OPENSHIFT_BUILD_NAMESPACE"="e2e-test-build-multistage-7w2ck"
Here is my Dockerfile:
FROM scratch as test
COPY --from=test /usr/bin/curl /test/
COPY --from=busybox:latest /bin/echo /test/
COPY --from=busybox:latest /bin/ping /test/
Per comment #13, moving this bug to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.