+++ This bug was initially created as a clone of Bug #1466583 +++

Description of problem:
Creating an app with an automatic imagechange trigger causes a single image change to trigger two deployments.

Version-Release number of selected component (if applicable):
openshift v3.7.0-0.147.1
kubernetes v1.7.6+a08f5eeb62
etcd 3.2.1

How reproducible:
Always

Steps to Reproduce:
1. Log in to OpenShift and create a project;
2. Create an app: `oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git`
3. Trigger a new build so the image changes, then check the DC.

Actual results:
3. When the image changes, the deployment is triggered twice.

[root@qe-yinzhou36-master-1 ~]# oc describe dc/ruby-ex
Name:           ruby-ex
Namespace:      testzy
Created:        4 minutes ago
Labels:         app=ruby-ex
Annotations:    openshift.io/generated-by=OpenShiftNewApp
Latest Version: 4
Selector:       app=ruby-ex,deploymentconfig=ruby-ex
Replicas:       1
Triggers:       Config, Image(ruby-ex@latest, auto=true)
Strategy:       Rolling
Template:
  Pod Template:
    Labels:      app=ruby-ex
                 deploymentconfig=ruby-ex
    Annotations: openshift.io/generated-by=OpenShiftNewApp
    Containers:
      ruby-ex:
        Image:       docker-registry.default.svc:5000/testzy/ruby-ex@sha256:d07573ca263e7492b47b33789c69be975da55d077f08690decce52d68a1a7631
        Port:        8080/TCP
        Environment: <none>
        Mounts:      <none>
    Volumes: <none>

Deployment #4 (latest):
  Name:        ruby-ex-4
  Created:     29 seconds ago
  Status:      Complete
  Replicas:    1 current / 1 desired
  Selector:    app=ruby-ex,deployment=ruby-ex-4,deploymentconfig=ruby-ex
  Labels:      app=ruby-ex,openshift.io/deployment-config.name=ruby-ex
  Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Deployment #3:
  Created:  34 seconds ago
  Status:   Failed
  Replicas: 0 current / 0 desired
Deployment #2:
  Created:  2 minutes ago
  Status:   Complete
  Replicas: 0 current / 0 desired

Events:
  FirstSeen  LastSeen  Count  From                         Type    Reason                          Message
  ---------  --------  -----  ----                         ----    ------                          -------
  2m         2m        10     deploymentconfig-controller  Normal  DeploymentAwaitingCancellation  Deployment of version 2 awaiting cancellation of older running deployments
  2m         2m        1      deployer-controller          Normal  RolloutCancelled                ruby-ex-1: Rollout for "testzy/ruby-ex-1" cancelled
  2m         2m        1      deploymentconfig-controller  Normal  DeploymentCreated               Created new replication controller "ruby-ex-2" for version 2
  34s        34s       1      deploymentconfig-controller  Normal  DeploymentCreated               Created new replication controller "ruby-ex-3" for version 3
  33s        33s       1      deploymentconfig-controller  Normal  DeploymentCancelled             Cancelled deployment "ruby-ex-3" superceded by version 4
  33s        30s       10     deploymentconfig-controller  Normal  DeploymentAwaitingCancellation  Deployment of version 4 awaiting cancellation of older running deployments
  29s        29s       1      deploymentconfig-controller  Normal  DeploymentCreated               Created new replication controller "ruby-ex-4" for version 4
  29s        29s       1      deployer-controller          Normal  RolloutCancelled                ruby-ex-3: Rollout for "testzy/ruby-ex-3" cancelled

[root@qe-yinzhou36-master-1 ~]# oc get is ruby-ex -o yaml
apiVersion: v1
......
status:
  dockerImageRepository: docker-registry.default.svc:5000/testzy/ruby-ex
  tags:
  - items:
    - created: 2017-06-30T02:34:02Z
      dockerImageReference: docker-registry.default.svc:5000/testzy/ruby-ex@sha256:d07573ca263e7492b47b33789c69be975da55d077f08690decce52d68a1a7631
      generation: 1
      image: sha256:d07573ca263e7492b47b33789c69be975da55d077f08690decce52d68a1a7631
    - created: 2017-06-30T02:32:11Z
      dockerImageReference: docker-registry.default.svc:5000/testzy/ruby-ex@sha256:26beea1f9a4f4b4752ea2e07aaba0906b17f9f1d744ddcf813458b0f45c65c45
      generation: 1
      image: sha256:26beea1f9a4f4b4752ea2e07aaba0906b17f9f1d744ddcf813458b0f45c65c45
    tag: latest

Expected results:
3. An image change should trigger the deployment only once.

Additional info:

--- Additional comment from Michal Fojtik on 2017-06-30 06:14:45 EDT ---

Steps 2 and 3 are confusing... The new-app should trigger a new build automatically. Are you triggering an extra build? That might result in the second deployment.
--- Additional comment from Michal Fojtik on 2017-06-30 06:16:53 EDT ---

If my comment is not valid and you are not triggering an extra build, can you please provide the master log from when this happens? We have a similar issue on GitHub that is causing the new-app test to flake hard. This looks very similar to that, but we are not able to reproduce it; for us it only happens on GCE.

--- Additional comment from zhou ying on 2017-06-30 06:37 EDT ---

--- Additional comment from Michal Fojtik on 2017-06-30 11:21:43 EDT ---

The fix should be https://github.com/openshift/origin/pull/14882. We also need to update the Ansible playbook and cut a release to pick up these changes.

--- Additional comment from Michal Fojtik on 2017-07-03 04:34:10 EDT ---

This is basically a duplicate of #1463499
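For context, the automatic image-change trigger that `oc new-app` sets up on the DeploymentConfig looks roughly like this. This is a sketch reconstructed from the `Triggers: Config, Image(ruby-ex@latest, auto=true)` line in the `oc describe` output above, not the exact stored object:

```yaml
# Sketch of the DC trigger section (v1 DeploymentConfig API);
# names taken from the reproducer in this report.
triggers:
- type: ConfigChange
- type: ImageChange
  imageChangeParams:
    automatic: true        # auto=true: a new image in the stream starts a rollout
    containerNames:
    - ruby-ex
    from:
      kind: ImageStreamTag
      name: ruby-ex:latest
      namespace: testzy
```

With `automatic: true`, each resolved image change should instantiate exactly one new deployment; the events above show two rollouts (versions 3 and 4) being created for one image push instead.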
Why do we have this clone?
The original bug was a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1463499, which was verified. Do you see this behavior in 3.7.0 (i.e., have we regressed)?
We could reproduce this with v3.7.0-0.147.1, but with openshift v3.7.0-0.155.0 we can no longer reproduce it, which is odd. Setting priority to low.
I have reproduced this on OCP 3.7.14 running in CDK 3.3.0-1. The first deployment gets cancelled automatically, and the second one has problems pulling the image:

Failed to pull image "orders:latest": rpc error: code = 2 desc = unauthorized: authentication required

If I cancel the second deployment and trigger one manually, the deployment completes with no problem. This makes using the fabric8 maven plugin very difficult, as manual intervention is required.
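The manual workaround described above can be expressed with standard `oc rollout` subcommands. A sketch, assuming the DC is named `orders` (taken from the pull error above) and you are logged in to the affected project:

```shell
# Cancel the stuck second rollout (the one failing with the pull error)
oc rollout cancel dc/orders

# Manually trigger a fresh rollout, which pulls the image without problems
oc rollout latest dc/orders

# Wait for the new deployment to complete
oc rollout status dc/orders
```

This is only a per-incident workaround; it does not remove the manual-intervention problem for fabric8-maven-plugin driven deploys.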