Bug 1386018

Summary: [platformmanagement_public_699] DC status does not show the ReplicationControllerCreateError and NewReplicationControllerCreated reasons correctly
Product: OKD
Component: Deployments
Version: 3.x
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: medium
Reporter: zhou ying <yinzhou>
Assignee: Michail Kargakis <mkargaki>
QA Contact: zhou ying <yinzhou>
CC: aos-bugs, mkargaki
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Type: Bug
Regression: ---
Last Closed: 2016-12-09 21:53:16 UTC

Description zhou ying 2016-10-18 01:27:48 UTC
Description of problem:
Set the replicationcontrollers quota for the project to 2 so that creation of a third RC fails, then check the DC: its status does not show the reason ReplicationControllerCreateError.
Create a new DC and check its status: it does not show the reason NewReplicationControllerCreated.
Resume a paused DC and check its status: it does not show the reason DeploymentConfigResumed.

Version-Release number of selected component (if applicable):
openshift version
openshift v1.4.0-alpha.0+2142bdb
kubernetes v1.4.0+776c994
etcd 3.1.0-alpha.1

How reproducible:
Always

Steps to Reproduce:
1. Start OpenShift and log in;
2. Set the quota for the project:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: myquota
spec:
  hard:
    cpu: "30"
    memory: 16Gi
    persistentvolumeclaims: "20"
    pods: "20"
    replicationcontrollers: "2"
    resourcequotas: "1"
    secrets: "15"
    services: "10"

3. Try to create 3 DCs; the third one fails to create. Check the third DC's status;
4. Create a new project and a new DC, then check the DC status;
5. Pause the DC, then resume it, and check the DC status (see the command sketch after these steps).
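
A minimal sketch of these steps as oc commands (the project name "myproject", the DC names "dc1"-"dc3", and the image openshift/deployment-example are placeholders for illustration, not taken from this report):

$ oc login -u <user>
$ oc new-project myproject
$ oc create -f quota.yaml -n myproject                  # quota.yaml contains the ResourceQuota above
$ oc new-app openshift/deployment-example --name=dc1
$ oc new-app openshift/deployment-example --name=dc2
$ oc new-app openshift/deployment-example --name=dc3    # the third RC exceeds the quota
$ oc get dc dc3 -o yaml                                 # inspect .status.conditions
$ oc patch dc/dc1 -p '{"spec":{"paused":true}}'         # pause the DC
$ oc patch dc/dc1 -p '{"spec":{"paused":false}}'        # resume it
$ oc get dc dc1 -o yaml                                 # check the conditions again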

Actual results:
3. The third DC failed, but its status does not show the reason ReplicationControllerCreateError;
4. The DC status reason is "ReplicationControllerUpdated";
5. The DC status reason is "NewReplicationControllerAvailable".


Expected results:
3. Should show the reason ReplicationControllerCreateError;
4. Should show the reason NewReplicationControllerCreated;
5. Should show the reason DeploymentConfigResumed.

Additional info:

Comment 1 openshift-github-bot 2016-10-22 14:45:12 UTC
Commit pushed to master at https://github.com/openshift/origin

https://github.com/openshift/origin/commit/e343373c034741c067adb346dc1b9864e1bb68bd
Bug 1386018: use deployment conditions when creating a rc

Adds conditions to the deployment config when a replication controller
is created or when an error occurs while trying to create.

Comment 2 zhou ying 2016-10-25 02:24:50 UTC
Confirmed with the latest AMI; still can't see the reason in the DC status:

openshift version
openshift v1.4.0-alpha.0+c94f61a
kubernetes v1.4.0+776c994
etcd 3.1.0-alpha.1

  conditions:
  - lastTransitionTime: 2016-10-25T02:14:06Z
    message: Deployment config does not have minimum availability.
    status: "False"
    type: Available


replicationcontrollers "hooks-1" is forbidden: exceeded quota: myquota, requested:
replicationcontrollers=1, used: replicationcontrollers=2, limited: replicationcontrollers=2
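
For reference, one way to dump just the condition types and reasons ("hooks" is the DC name from the error above; -o jsonpath is assumed to be supported by this oc build):

$ oc get dc hooks -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.reason}{"\n"}{end}'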

Comment 3 Michail Kargakis 2016-10-25 11:48:16 UTC
Not sure why it doesn't work for you. Works fine for me: http://pastebin.com/3XGfR3Qr

$ oc version
oc v1.4.0-alpha.0+50efe7d-787
kubernetes v1.4.0+776c994
features: Basic-Auth

Server https://10.0.2.15:8443
openshift v1.4.0-alpha.0+50efe7d-787
kubernetes v1.4.0+776c994

Comment 4 zhou ying 2016-10-26 09:41:43 UTC
Today I can verify this with the latest OCP, but on Origin I can still reproduce it.
And the reason DeploymentConfigResumed is blocked by bug 1388832.

A question:
How do I trigger the reason NewReplicationControllerCreated?

Comment 5 Michail Kargakis 2016-10-28 13:18:32 UTC
https://github.com/openshift/origin/pull/11609 got merged. You should be able to see the NewReplicationControllerCreated reason now. If you want to verify it easily, add the deploy.openshift.io/deployer-pod.ignore: "true" annotation to your DC and start a new rollout. The deployer pod will never be created, so your DC will stay with the NewReplicationControllerCreated reason.
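
For example, a minimal sketch of that check ("database" is just an example DC name; oc rollout latest is assumed to be available in this build):

metadata:
  annotations:
    deploy.openshift.io/deployer-pod.ignore: "true"

$ oc rollout latest dc/database    # no deployer pod is created
$ oc get dc database -o yaml       # conditions should keep reason: NewReplicationControllerCreated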

Comment 6 zhou ying 2016-11-03 03:02:03 UTC
Confirmed with the latest AMI; the issue has been fixed.
openshift version
openshift v1.4.0-alpha.0+90d8c62-1000
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0


  - lastTransitionTime: 2016-11-03T03:00:19Z
    message: 'replicationcontrollers "hello-openshift-1" is forbidden: exceeded quota:
      myquota, requested: replicationcontrollers=1, used: replicationcontrollers=2,
      limited: replicationcontrollers=2'
    reason: ReplicationControllerCreateError


  - lastTransitionTime: 2016-11-03T02:58:40Z
    message: Created new replication controller "database-1" for version 1
    reason: NewReplicationControllerCreated