Description of problem:
Build Config timed out waiting for condition 400: Bad Request

Version-Release number of selected component (if applicable):
OCP 4.6

How reproducible:
100%

Steps to Reproduce:
1. Create a build config with the following spec:

  spec:
    failedBuildsHistoryLimit: 3
    nodeSelector: null
    output:
      to:
        kind: ImageStreamTag
        name: test-configmap-input-is:0.1.0
    postCommit: {}
    resources: {}
    runPolicy: Parallel
    source:
      binary: {}
      configMaps:
      - configMap:
          name: yum-satellite-repos
        destinationDir: yum.repos.d
      type: Binary
    strategy:
      dockerStrategy:
        dockerfilePath: Dockerfile
        forcePull: true
      type: Docker
    successfulBuildsHistoryLimit: 3
    triggers:
    - type: ConfigChange

2. Ensure the ConfigMap yum-satellite-repos is not present in the namespace.
3. # oc start-build <build_name> --from-dir=. --wait --follow

Actual results:

Uploading finished
I0623 11:40:21.079482  821384 helpers.go:216] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "unable to wait for build test-configmap-input-bc-4 to run: timed out waiting for the condition",
  "reason": "BadRequest",
  "code": 400
}]

Expected results:
The build should fail immediately and report the missing resource instead of waiting for the timeout condition.

Additional info:
The build timed out because the pod created for the build remains in a perpetual "PodInitializing" state, waiting for the referenced ConfigMap to appear. Unfortunately there is nothing directly in the pod status or information returned from `oc` that makes it clear the build can't run because it references a ConfigMap that doesn't exist yet. The same thing occurs if a BuildConfig references a Secret that doesn't exist. The better user experience in this situation is for the build to fail immediately, with a reason and message indicating that the referenced ConfigMap (or Secret) does not exist. Users will then be able to take corrective action. This can be reproduced with a BuildConfig that references a ConfigMap or Secret that doesn't exist and uses Binary source.
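Until a build fails fast on its own, a pre-flight check before `oc start-build` is a workable workaround. The sketch below is illustrative (the function name is made up for this example); it verifies that the ConfigMap referenced by the BuildConfig exists in the current namespace before the binary source is uploaded, so the missing resource surfaces immediately instead of after the wait timeout.

preflight_configmap() {
  # Check that the ConfigMap the BuildConfig mounts actually exists;
  # if it does not, the build pod would sit in PodInitializing forever.
  local name="$1"
  if ! oc get configmap "$name" >/dev/null 2>&1; then
    echo "ConfigMap '$name' not found; build pod would hang in PodInitializing" >&2
    return 1
  fi
}

# Example usage (build name taken from the report above):
#   preflight_configmap yum-satellite-repos \
#     && oc start-build test-configmap-input-bc --from-dir=. --wait --follow

The same check applies to Secrets via `oc get secret`, since a BuildConfig referencing a nonexistent Secret hangs the same way.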
*** Bug 1978627 has been marked as a duplicate of this bug. ***
VERIFIED
===========

jsingh@fugaku /usr/local/go/src/github.com/openshift/ruby-hello-world master ± oc new-project build-test
Now using project "build-test" on server "https://api.ci-ln-yqvddv2-f76d1.origin-ci-int-gce.dev.openshift.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname

jsingh@fugaku /usr/local/go/src/github.com/openshift/ruby-hello-world master ± oc get bc
NAME      TYPE     FROM     LATEST
example   Docker   Binary   1

jsingh@fugaku /usr/local/go/src/github.com/openshift/ruby-hello-world master ± oc start-build example --from-dir=. --wait --follow
Uploading directory "." as binary input for the build ...
Uploading finished
Error from server (BadRequest): build example-2 failed: InvalidOutputReference: Output image could not be resolved.

======================================================================================================================

kind: BuildConfig
apiVersion: build.openshift.io/v1
metadata:
  name: example
  namespace: build-test
  uid: f5df1936-8de4-4973-8b6e-eb8b4412fb8b
  resourceVersion: '33654'
  generation: 3
  creationTimestamp: '2021-09-30T12:05:38Z'
  managedFields:
    - manager: Mozilla
      operation: Update
      apiVersion: build.openshift.io/v1
      time: '2021-09-30T12:05:38Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:spec':
          'f:failedBuildsHistoryLimit': {}
          'f:output':
            'f:to':
              .: {}
              'f:kind': {}
              'f:name': {}
          'f:runPolicy': {}
          'f:source':
            'f:binary': {}
            'f:configMaps': {}
            'f:type': {}
          'f:strategy':
            'f:dockerStrategy':
              .: {}
              'f:dockerfilePath': {}
              'f:forcePull': {}
            'f:type': {}
          'f:successfulBuildsHistoryLimit': {}
          'f:triggers': {}
    - manager: openshift-apiserver
      operation: Update
      apiVersion: build.openshift.io/v1
      time: '2021-09-30T12:05:38Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:status':
          'f:lastVersion': {}
spec:
  nodeSelector: null
  output:
    to:
      kind: ImageStreamTag
      name: 'test-configmap-input-is:0.1.0'
  resources: {}
  successfulBuildsHistoryLimit: 3
  failedBuildsHistoryLimit: 3
  strategy:
    type: Docker
    dockerStrategy:
      forcePull: true
      dockerfilePath: Dockerfile
  postCommit: {}
  source:
    type: Binary
    binary: {}
    configMaps:
      - configMap:
          name: yum-satellite-repos
        destinationDir: yum.repos.d
  triggers:
    - type: ConfigChange
  runPolicy: Parallel
status:
  lastVersion: 2
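With the fixed behavior verified above, the failure reason is recorded on the Build object itself rather than being buried in a timeout. One way to read it back is with a jsonpath query (the helper name below is illustrative; the build name is taken from the transcript above):

build_failure_reason() {
  # Print the recorded reason for a failed build, e.g. InvalidOutputReference
  oc get build "$1" -o jsonpath='{.status.reason}'
}

# Example usage:
#   build_failure_reason example-2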
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0056
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days