Bug 1870469

Summary: Pipeline Run shows random Pod log messages if workspace is missing
Product: OpenShift Container Platform
Reporter: Mohammed Saud <msaud>
Component: Dev Console
Assignee: Mohammed Saud <msaud>
Status: CLOSED ERRATA
QA Contact: Karthik Jeeyar <kjeeyar>
Severity: high
Priority: high
Version: 4.6
CC: aos-bugs, kjeeyar, nmukherj
Target Milestone: ---
Target Release: 4.6.0
Hardware: Unspecified
OS: Unspecified
Last Closed: 2020-10-27 16:29:46 UTC
Type: Bug

Description Mohammed Saud 2020-08-20 07:40:31 UTC
Description of problem:

When setting up a Pipeline manually (with the steps described below), the Pipeline Run fails, which results in an unexpected Pipeline Run status. When checking the logs for the Pipeline Run, the log of a "random" Pod is shown to the user.
Prerequisites (if any, like setup, operators/versions):

    Install OpenShift Pipelines operator (tested with 1.0.1)

Steps to Reproduce

    Create a project / namespace
    Create at least one (or better, several) other running Pods (e.g. from the image jerolimov/nodeinfo)
    Create a Pipeline (Pipeline > Create)
    1. Select the Task "git-clone"
    2. Select the Task "git-clone" and enter a Git URL (e.g. https://github.com/jerolimov/docker)
    3. Press Create
    Run the new Pipeline and check out the logs
    1. Select the Action dropdown > Start
    2. Select the Logs tab
    You now see log messages from other Pods!

Actual results:

    You now see log messages from other pods!

The following error messages were part of the Pipeline Run:

    Pipeline run message: TaskRun new-pipeline-6a827s-git-clone-bv7jp has failed
    TaskRun message: bound workspaces did not match declared workspaces: didn't provide required values: [output]
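
The "git-clone" Task declares a workspace named "output", so the Pipeline and PipelineRun must bind it; the console-generated PipelineRun did not, which produces the TaskRun error above. A minimal sketch of a binding that would satisfy the declaration (the resource names below are illustrative, not taken from the reproduction):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: new-pipeline-run
spec:
  pipelineRef:
    name: new-pipeline
  workspaces:
    # Must match the workspace name the Pipeline declares and maps to
    # the git-clone Task's "output" workspace.
    - name: output
      emptyDir: {}
```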

Expected results:

    Do not show an invalid Pod log. Do not request Pods without a name filter.
    Show the correct log message, or at least an error message:
    Show the PipelineRun status condition message, or, better, the TaskRun status condition message!

Reproducibility (Always/Intermittent/Only Once):

Always
Build Details:
Additional info:

Select your Pipeline Run Details and open the YAML.

Under "taskRuns xxx status" you can see that the "podName" is an empty string (""). A first analysis with Andrew Ballantyne shows that this empty name is used to select "the first pod within your namespace" because the name filter was not applied.

See also packages/dev-console/src/components/pipelineruns/detail-page-tabs/PipelineRunLogs.tsx
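
The expected behaviour can be sketched as a guard on the empty podName. This is a hypothetical illustration, not the actual PipelineRunLogs.tsx code; the type and function names below are assumptions: only fetch Pod logs when `status.podName` is non-empty, and otherwise surface the TaskRun status condition message instead of querying Pods without a name filter.

```typescript
// Hypothetical shape of the parts of a TaskRun status this sketch needs.
type TaskRunStatus = {
  podName?: string;
  conditions?: { type: string; status: string; message?: string }[];
};

// Return the Pod name to fetch logs for, or null if the TaskRun has no
// Pod yet (e.g. it failed workspace validation before a Pod was created).
function getLogPodName(status: TaskRunStatus): string | null {
  return status.podName && status.podName.length > 0 ? status.podName : null;
}

// When no Pod exists, show the failed "Succeeded" condition message
// (e.g. "bound workspaces did not match declared workspaces: ...")
// instead of the log of an arbitrary Pod in the namespace.
function getFallbackMessage(status: TaskRunStatus): string {
  const failed = (status.conditions ?? []).find(
    (c) => c.type === 'Succeeded' && c.status === 'False',
  );
  return failed?.message ?? 'No logs available for this TaskRun.';
}
```

With this guard, a TaskRun whose podName is "" would render the condition message rather than triggering an unfiltered Pod lookup.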

Comment 3 Karthik Jeeyar 2020-09-23 10:57:31 UTC
Verification Details: 
Build: 4.6.0-0.nightly-2020-09-22-130743
url: https://console-openshift-console.apps.dev-svc-4.6-092307.devcluster.openshift.com/k8s/ns/rhd-test/tekton.dev~v1beta1~PipelineRun/nodejs-ex-mru0wv/logs
user: kubeadmin

Comment 5 errata-xmlrpc 2020-10-27 16:29:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196