Bug 1418990 - Number of replicas and pods mismatch
Summary: Number of replicas and pods mismatch
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OKD
Classification: Red Hat
Component: Deployments
Version: 3.x
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Michal Fojtik
QA Contact: zhou ying
URL:
Whiteboard: online_3.4.1
Depends On:
Blocks:
 
Reported: 2017-02-03 11:28 UTC by Eduard Trott
Modified: 2017-02-09 09:47 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-02-09 09:47:09 UTC
Target Upstream Version:



Description Eduard Trott 2017-02-03 11:28:57 UTC
Description of problem:
Number of replicas and pods mismatch

Version-Release number of selected component (if applicable):
$ oc version
oc v3.4.1.2
kubernetes v1.4.0+776c994
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://console.dev-preview-stg.openshift.com:443
openshift v3.4.1.2
kubernetes v1.4.0+776c994


How reproducible:
always

Steps to Reproduce:
1. Create a project
2. oc new-app openshift/perl:5.16 --code=https://github.com/openshift/sti-perl -l app\=test-perl --context-dir=5.16/test/sample-test-app/ --name=myapp
3. oc scale replicationcontrollers myapp-1 --replicas=2
4. oc describe replicationcontrollers myapp-1


Actual results:
      Name:		myapp-1
      Namespace:	9xtby
      Image(s):	172.30.46.234:5000/9xtby/myapp@sha256:dda14896ad87c6585adedb557a5e1555c9e188e113285c1f62acb4eac035d82b
      Selector:	app=test-perl,deployment=myapp-1,deploymentconfig=myapp
      Labels:		app=test-perl
      		openshift.io/deployment-config.name=myapp
      Replicas:	1 current / 1 desired
      Pods Status:	2 Running / 0 Waiting / 0 Succeeded / 0 Failed
      No volumes.
      Events:
        FirstSeen	LastSeen	Count	From				SubobjectPath	Type		Reason			Message
        ---------	--------	-----	----				-------------	--------	------			-------
        1m		1m		1	{replication-controller }			Normal		SuccessfulCreate	Created pod: myapp-1-9aw0h
        55s		55s		1	{replication-controller }			Normal		SuccessfulCreate	Created pod: myapp-1-zwbks
        55s		55s		1	{replication-controller }			Normal		SuccessfulDelete	Deleted pod: myapp-1-zwbks


Expected results:
--//--
Replicas:	2 current / 2 desired
Pods Status:	2 Running / 0 Waiting / 0 Succeeded / 0 Failed
--//--

Additional info:

Comment 1 Weihua Meng 2017-02-04 14:01:56 UTC
replicationcontroller myapp-1 is controlled by deploymentconfig myapp.

In step 3, the RC's replica count was set to 2, but the DC controller then scaled it back to 1 (see the ReplicationControllerScaled event below), which is why a pod was created and then immediately deleted.

# oc describe dc myapp
<---snip--->
Events:
  FirstSeen	LastSeen	Count	From				SubObjectPath	Type		Reason				Message
  ---------	--------	-----	----				-------------	--------	------				-------
  8m		8m		1	{deploymentconfig-controller }			Normal		DeploymentCreated		Created new replication controller "myapp-1" for version 1
  2m		2m		1	{deploymentconfig-controller }			Normal		ReplicationControllerScaled	Scaled replication controller "myapp-1" from 2 to 1

If we instead scale the DC:
# oc scale dc/myapp --replicas=2
we get:
# oc describe rc/myapp-1
<---snip--->
Replicas:	2 current / 2 desired
Pods Status:	1 Running / 1 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen	LastSeen	Count	From				SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----				-------------	--------	------			-------
  8m		8m		1	{replication-controller }			Normal		SuccessfulCreate	Created pod: myapp-1-m4lhw
  1m		1m		1	{replication-controller }			Normal		SuccessfulCreate	Created pod: myapp-1-uceuq
  1m		1m		1	{replication-controller }			Normal		SuccessfulDelete	Deleted pod: myapp-1-uceuq
  7s		7s		1	{replication-controller }			Normal		SuccessfulCreate	Created pod: myapp-1-oxfqa

# oc describe dc/myapp
<---snip--->
Events:
  FirstSeen	LastSeen	Count	From				SubObjectPath	Type		Reason				Message
  ---------	--------	-----	----				-------------	--------	------				-------
  8m		8m		1	{deploymentconfig-controller }			Normal		DeploymentCreated		Created new replication controller "myapp-1" for version 1
  2m		2m		1	{deploymentconfig-controller }			Normal		ReplicationControllerScaled	Scaled replication controller "myapp-1" from 2 to 1
  20s		20s		1	{deploymentconfig-controller }			Normal		ReplicationControllerScaled	Scaled replication controller "myapp-1" from 1 to 2

Comment 2 Michal Fojtik 2017-02-06 17:33:31 UTC
Are we OK closing this as NOTABUG? As you pointed out, you should never scale an RC manually when it is controlled by a DC.

The describe output you provided after scaling the DC looks correct (1 Running + 1 Waiting = 2 pods in total, matching the 2 desired replicas).
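The behavior in comment 1 is ordinary controller reconciliation: the DC's replica count is the source of truth, so a manual edit to the RC is overwritten on the next reconcile. A toy Python sketch of that loop (illustrative class and method names only, not the actual OpenShift controller code):

```python
# Toy model of deploymentconfig -> replicationcontroller reconciliation.
# Illustrative only; names are invented, not the real OpenShift code.

class ReplicationController:
    def __init__(self, replicas):
        self.replicas = replicas  # what the RC currently asks for

class DeploymentConfig:
    def __init__(self, replicas):
        self.replicas = replicas  # source of truth for the desired count

    def reconcile(self, rc):
        """Force the RC back to the DC's desired replica count."""
        if rc.replicas != self.replicas:
            rc.replicas = self.replicas

dc = DeploymentConfig(replicas=1)
rc = ReplicationController(replicas=1)

# Step 3 of the reproducer: scale the RC directly...
rc.replicas = 2
# ...but the DC controller reverts it on the next reconcile,
# which is why the extra pod is created and then deleted.
dc.reconcile(rc)
print(rc.replicas)  # 1

# Scaling the DC instead sticks: the controller propagates it to the RC.
dc.replicas = 2
dc.reconcile(rc)
print(rc.replicas)  # 2
```

This is why `oc scale dc/myapp --replicas=2` works while `oc scale replicationcontrollers myapp-1 --replicas=2` is silently undone.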

Comment 4 Abhishek Gupta 2017-02-08 19:22:36 UTC
Moving this bug to Origin to decide whether we want to fix it or close it as WONTFIX.

