Description of problem:
When a deployment is triggered, more than one deployer pod is created.

Version-Release number of selected component (if applicable):
openshift v3.7.14
kubernetes v1.7.6+a08f5eeb62
etcd 3.2.8

How reproducible:
See attached logs.

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
Created attachment 1382368 [details] master logs
Created attachment 1382387 [details] oc get dc,rc,po,is,build -o yaml
Verified in OCP env:
openshift v3.7.28
kubernetes v1.7.6+a08f5eeb62
etcd 3.2.8

# oc process -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/deployment/OCP-11384/application-template-stibuild.json | oc create -f -
secret "dbsecret" created
service "frontend" created
route "route-edge" created
imagestream "origin-ruby-sample" created
imagestream "ruby-22-centos7" created
buildconfig "ruby-sample-build" created
deploymentconfig "frontend" created
service "database" created
deploymentconfig "database" created

# oc get pods
NAME                        READY     STATUS      RESTARTS   AGE
database-1-zz8th            1/1       Running     0          2m
frontend-1-bktv6            1/1       Running     0          1m
frontend-1-bpcr5            1/1       Running     0          1m
ruby-sample-build-1-build   0/1       Completed   0          2m
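When verifying, it can help to confirm programmatically that only one deployer pod exists per deployment config. A minimal sketch (assuming the standard OpenShift 3.x deployer naming convention `<dc>-<version>-deploy`; the helper name is hypothetical, not part of any product tooling) that flags DCs with more than one deployer pod in an `oc get pods` listing:

```python
import re
from collections import Counter

# Deployer pods in OpenShift 3.x are named <dc-name>-<version>-deploy.
_DEPLOYER_RE = re.compile(r"^(?P<dc>.+)-\d+-deploy$")

def extra_deployers(pod_names):
    """Return {dc-name: count} for DCs with more than one deployer pod."""
    counts = Counter()
    for name in pod_names:
        m = _DEPLOYER_RE.match(name)
        if m:
            counts[m.group("dc")] += 1
    return {dc: c for dc, c in counts.items() if c > 1}

# Example: two concurrent deployer pods for "frontend" would be flagged.
pods = ["frontend-1-deploy", "frontend-2-deploy",
        "database-1-deploy", "database-1-zz8th"]
print(extra_deployers(pods))
```

In the verified output above no `-deploy` pods remain (they are cleaned up after a successful rollout), so an empty result is expected there.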
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:0636