Description of problem:
Log Tail does not show the error log when a build fails.

Version-Release number of selected component (if applicable):
openshift v3.7.0-0.125.0
kubernetes v1.7.0+695f48a16f
etcd 3.2.1

How reproducible:
Always

Steps to Reproduce:
1. Create an S2I build using an incorrect assemble script:
   $ oc new-app --image-stream=openshift/ruby:2.2 https://github.com/openshift-qe/ruby-hello-world\#invalidassemble
2. Wait for the build to fail, then check the build status:
   $ oc describe build

Actual results:
$ oc describe build
Status:              Failed (Assemble script failed.)
Started:             Fri, 08 Sep 2017 07:15:40 UTC
Duration:            15s
  FetchInputs:       0s
Build Config:        ruby-hello-world
Build Pod:           ruby-hello-world-1-build
Strategy:            Source
URL:                 https://github.com/openshift-qe/ruby-hello-world
Ref:                 invalidassemble
Commit:              dbfeb32 (Create assemble)
Author/Committer:    Wenjing Zheng / GitHub
Build trigger cause: Image change
Image Name/Kind:     ruby:2.2 / ImageStreamTag
Log Tail:            Error on reading termination message from logs: failed to ...ab7-fa163ea9d7b7/sti-build_0.log: no such file or directory
Events:

Expected results:
Log Tail should show the error log when the build fails.

Additional info:
That message comes from Kubernetes, which is supposed to fill in the termination message on the pod itself (and it did, but instead of filling it in with the actual logs, it filled it in with the error it got while trying to read the logs). To confirm this, you can run "oc get pod <build-pod-name> -o yaml" and inspect the termination message recorded there. You'll see the same text that appears in the build's LogSnippet. I think this is a dupe, but I'll let Seth point out which bug it's a dupe of, since that one is probably assigned to him already.
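For reference, the text surfaced in the build's LogSnippet lives under the terminated container state in the pod status. A minimal sketch of pulling it out of a parsed pod object (the field names are the standard Kubernetes ones; the pod data here is an illustrative stub, not real cluster output):

```python
# Illustrative stub of a pod object as returned by "oc get pod ... -o yaml",
# parsed to a dict; only the fields relevant to this bug are shown.
pod = {
    "status": {
        "containerStatuses": [
            {
                "name": "sti-build",
                "state": {
                    "terminated": {
                        "exitCode": 1,
                        # With --log-driver=journald the kubelet fails to read
                        # the container log file and records the read error
                        # here instead of the log tail.
                        "message": "Error on reading termination message from logs: ...",
                    }
                },
            }
        ]
    }
}


def termination_message(pod, container="sti-build"):
    """Return the terminated-state message for the named container, if any."""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == container:
            return cs.get("state", {}).get("terminated", {}).get("message")
    return None


print(termination_message(pod))
```

This is the same value that the build controller copies into the build's LogSnippet, which is why the two match.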
Joel, PTAL. Seems easy enough to reproduce and thus to track down.
Ben, are you aware of a duplicate? I don't see one.
I'm reasonably sure we've had a report of this before, either as a GitHub issue or a bug, though I'm not certain. I believe it was this: https://github.com/openshift/origin/issues/15878. Not sure whether Andrew ever opened the upstream issue.
I have reproduced this, and it appears that the feature is broken as described when docker runs with the following option: --log-driver=journald. Without that docker option, the feature works as expected. I'm investigating now.
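For context, the failure mode can be modeled roughly like this: when the container's termination-message file is empty or missing, the kubelet falls back to tailing the container's log file on disk, and with the journald log driver no such log file exists, so the read error itself becomes the message. A simplified, hypothetical model of that fallback (not the actual kubelet code):

```python
import os


def read_termination_message(termination_message_path, container_log_path):
    """Simplified model of the kubelet fallback: prefer the termination
    message file; otherwise tail the container log file. With
    --log-driver=journald the log file is never written, so the fallback
    fails and the error text ends up as the termination message."""
    try:
        with open(termination_message_path) as f:
            msg = f.read()
        if msg:
            return msg
    except OSError:
        pass  # no termination message file; fall back to the log file
    try:
        with open(container_log_path) as f:
            return f.read()[-4096:]  # keep only the tail of the log
    except OSError as err:
        # This error string is what shows up in the build's Log Tail.
        return "Error on reading termination message from logs: %s" % err


# With journald, neither file exists, so the error path is taken:
print(read_termination_message("/nonexistent/termination-log",
                               "/nonexistent/sti-build_0.log"))
```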
Upstream issue filed: https://github.com/kubernetes/kubernetes/issues/52502 Upstream fix posted: https://github.com/kubernetes/kubernetes/pull/52503 Assuming all is well, we'll pick from upstream back to origin and it should make 3.7.
This has finally merged into Origin and should be in 3.7. I'll add a note of an exact version once we have builds with it in. https://github.com/openshift/origin/pull/16912
Verified on:
openshift v3.7.0-0.189.0
kubernetes v1.7.6+a08f5eeb62
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2017:3188