Description of problem:
Number of replicas and pods mismatch

Version-Release number of selected component (if applicable):
$ oc version
oc v3.4.1.2
kubernetes v1.4.0+776c994
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://console.dev-preview-stg.openshift.com:443
openshift v3.4.1.2
kubernetes v1.4.0+776c994

How reproducible:
always

Steps to Reproduce:
1. Create a project
2. oc new-app openshift/perl:5.16 --code=https://github.com/openshift/sti-perl -l app\=test-perl --context-dir=5.16/test/sample-test-app/ --name=myapp
3. oc scale replicationcontrollers myapp-1 --replicas=2
4. oc describe replicationcontrollers myapp-1

Actual results:
Name:           myapp-1
Namespace:      9xtby
Image(s):       172.30.46.234:5000/9xtby/myapp@sha256:dda14896ad87c6585adedb557a5e1555c9e188e113285c1f62acb4eac035d82b
Selector:       app=test-perl,deployment=myapp-1,deploymentconfig=myapp
Labels:         app=test-perl
                openshift.io/deployment-config.name=myapp
Replicas:       1 current / 1 desired
Pods Status:    2 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen  LastSeen  Count  From                       SubobjectPath  Type    Reason            Message
  ---------  --------  -----  ----                       -------------  ----    ------            -------
  1m         1m        1      {replication-controller }                 Normal  SuccessfulCreate  Created pod: myapp-1-9aw0h
  55s        55s       1      {replication-controller }                 Normal  SuccessfulCreate  Created pod: myapp-1-zwbks
  55s        55s       1      {replication-controller }                 Normal  SuccessfulDelete  Deleted pod: myapp-1-zwbks

Expected results:
--//--
Replicas:       2 current / 2 desired
Pods Status:    2 Running / 0 Waiting / 0 Succeeded / 0 Failed
--//--

Additional info:
replicationcontroller myapp-1 is controlled by deploymentconfig myapp. In step 3 the RC replicas were set to 2, then the DC scaled it back down to 1, resulting in a pod being created and then deleted:

# oc describe dc myapp
<---snip--->
Events:
  FirstSeen  LastSeen  Count  From                            SubObjectPath  Type    Reason                       Message
  ---------  --------  -----  ----                            -------------  ----    ------                       -------
  8m         8m        1      {deploymentconfig-controller }                 Normal  DeploymentCreated            Created new replication controller "myapp-1" for version 1
  2m         2m        1      {deploymentconfig-controller }                 Normal  ReplicationControllerScaled  Scaled replication controller "myapp-1" from 2 to 1

If we instead do this:

# oc scale dc/myapp --replicas=2

we get:

# oc describe rc/myapp-1
<---snip--->
Replicas:       2 current / 2 desired
Pods Status:    1 Running / 1 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen  LastSeen  Count  From                       SubObjectPath  Type    Reason            Message
  ---------  --------  -----  ----                       -------------  ----    ------            -------
  8m         8m        1      {replication-controller }                 Normal  SuccessfulCreate  Created pod: myapp-1-m4lhw
  1m         1m        1      {replication-controller }                 Normal  SuccessfulCreate  Created pod: myapp-1-uceuq
  1m         1m        1      {replication-controller }                 Normal  SuccessfulDelete  Deleted pod: myapp-1-uceuq
  7s         7s        1      {replication-controller }                 Normal  SuccessfulCreate  Created pod: myapp-1-oxfqa

# oc describe dc/myapp
<---snip--->
Events:
  FirstSeen  LastSeen  Count  From                            SubObjectPath  Type    Reason                       Message
  ---------  --------  -----  ----                            -------------  ----    ------                       -------
  8m         8m        1      {deploymentconfig-controller }                 Normal  DeploymentCreated            Created new replication controller "myapp-1" for version 1
  2m         2m        1      {deploymentconfig-controller }                 Normal  ReplicationControllerScaled  Scaled replication controller "myapp-1" from 2 to 1
  20s        20s       1      {deploymentconfig-controller }                 Normal  ReplicationControllerScaled  Scaled replication controller "myapp-1" from 1 to 2
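As a side note, a quick way to confirm which DC owns an RC before scaling, and to verify that scaling the DC propagates down, is something like the following (a sketch reusing the names and labels from the output above; exact flags and output may vary by oc release):

# oc get rc -l openshift.io/deployment-config.name=myapp   # RCs owned by dc/myapp (label visible in the describe output)
# oc scale dc/myapp --replicas=2                            # scale the owning DC, not the RC
# oc get rc myapp-1 -o jsonpath='{.spec.replicas} {.status.replicas}{"\n"}'   # should settle at "2 2"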
Are we OK closing this bug as not a bug? You should never scale an RC manually if it is controlled by a DC, as you pointed out. The describe output you provided after scaling the DC looks fine (1 Running + 1 Waiting = 2 pods in total, matching 2 replicas).
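For completeness, once the Waiting pod finishes starting the counts should converge; something like this (same names as above, purely illustrative) can be used to double-check:

# oc get rc myapp-1                    # expect 2 desired / 2 current
# oc get pods -l deployment=myapp-1    # expect both pods Running (selector taken from the RC describe output)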
Moving this bug to origin to decide if we want to fix this or close as won't-fix.