Bug 1749239

Summary: [ci] [vsphere] Flaky test: will orphan all RCs and adopt them back when recreated
Product: OpenShift Container Platform
Reporter: Junqi Zhao <juzhao>
Component: kube-controller-manager
Assignee: Tomáš Nožička <tnozicka>
Status: CLOSED DUPLICATE
QA Contact: zhou ying <yinzhou>
Severity: medium
Docs Contact:
Priority: medium
Version: 4.2.0
CC: aos-bugs, mfojtik
Target Milestone: ---
Target Release: 4.4.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-11-11 15:39:56 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Junqi Zhao 2019-09-05 08:10:29 UTC
Description of problem:
https://storage.googleapis.com/origin-ci-test/logs/canary-openshift-ocp-installer-e2e-vsphere-upi-4.2/47/build-log.txt


failed: (2m7s) 2019-09-04T22:56:15 "[Feature:DeploymentConfig] deploymentconfigs adoption [Conformance] will orphan all RCs and adopt them back when recreated [Suite:openshift/conformance/parallel/minimal]"


Flaky tests:

[Feature:DeploymentConfig] deploymentconfigs adoption [Conformance] will orphan all RCs and adopt them back when recreated [Suite:openshift/conformance/parallel/minimal]
[Feature:Image][triggers] Image change build triggers TestSimpleImageChangeBuildTriggerFromImageStreamTagSTI [Suite:openshift/conformance/parallel]
[Feature:Image][triggers] Image change build triggers TestSimpleImageChangeBuildTriggerFromImageStreamTagSTIWithConfigChange [Suite:openshift/conformance/parallel]
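
For context on what the failing test exercises: the DeploymentConfig is deleted with orphan propagation so its ReplicationControllers lose their controller ownerReference, the DC is recreated, and the deploymentconfig controller is expected to adopt the RCs back. The following is only an illustrative client-go sketch of that adoption check, not the actual e2e test code; it assumes a reachable cluster, kubeconfig at the default path, namespace "test", and RCs named with the "deployment-simple-" prefix.

// Illustrative sketch only -- not the e2e test code. Assumes a reachable
// cluster, kubeconfig at the default path, namespace "test", and RCs named
// with the "deployment-simple-" prefix. After the DC is deleted with orphan
// propagation and recreated, every matching RC should regain a controller
// ownerReference.
package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ns := "test"

	// Poll until every RC belonging to the DC has a controller again,
	// i.e. the deploymentconfig controller has adopted the orphans.
	err = wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		rcs, err := client.CoreV1().ReplicationControllers(ns).List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		adopted := 0
		for i := range rcs.Items {
			rc := &rcs.Items[i]
			if !strings.HasPrefix(rc.Name, "deployment-simple-") {
				continue
			}
			if metav1.GetControllerOf(rc) == nil {
				return false, nil // still orphaned, keep waiting
			}
			adopted++
		}
		return adopted > 0, nil
	})
	fmt.Println("all RCs adopted:", err == nil)
}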


Version-Release number of selected component (if applicable):
4.2.0-0.nightly-2019-09-04-215255

How reproducible:
Sometimes

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Tomáš Nožička 2019-09-06 14:54:20 UTC
Where are the artifacts for that run? I can't navigate to them from the link you posted.


RC deployment-simple-2 has only 2 of 3 replicas available

pod deployment-simple-2-5b89l was created at "2019-09-04T22:54:38Z" and started at "2019-09-04T22:54:41Z" ready at "2019-09-04T22:54:51Z"

pod deployment-simple-2-txtb8 was created at "2019-09-04T22:55:02Z" and started at "2019-09-04T22:55:05Z" ready at "2019-09-04T22:55:15Z"

pod deployment-simple-2-x8q4q was created at "2019-09-04T22:55:02Z" and started at "2019-09-04T22:55:05Z" ready at "2019-09-04T22:55:07Z"

The test saw the scaling event at:
INFO: At 2019-09-04 22:55:02 +0000 UTC - event for deployment-simple: {deploymentconfig-controller } ReplicationControllerScaled: Scaled replication controller "deployment-simple-2" from 1 to 3

It was still gathering data at Sep  4 22:55:32.566, but it must not have re-checked availability after "2019-09-04T22:55:15Z", when the last pod became ready.
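
To make that concrete, the kind of availability check involved looks roughly like the sketch below (illustrative only, not the test's or controller's actual code; the helper name and parameters are made up, and it reuses the client-go imports from the sketch in the description). If its last poll runs before status.availableReplicas catches up with the third pod becoming ready, it reports "2 of 3 replicas available" even though the RC ends up healthy.

// Illustrative sketch; helper name and parameters are made up.
// Uses context, time, metav1, wait, and kubernetes from client-go,
// as in the earlier sketch.
func waitForRCAvailable(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		rc, err := client.CoreV1().ReplicationControllers(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		want := int32(1)
		if rc.Spec.Replicas != nil {
			want = *rc.Spec.Replicas
		}
		// Available only when the controller-reported count matches the desired count.
		return rc.Status.AvailableReplicas == want, nil
	})
}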

The timing is close to where the previous RC was deleted and the new one created. I think I saw something similar reported upstream for ReplicaSets (which share almost the same code).

Controller logs would be helpful, though.

Comment 3 Tomáš Nožička 2019-11-06 16:12:16 UTC
I think this is the same broken ReplicaSet behavior; the fix is in https://github.com/kubernetes/kubernetes/pull/82572

Comment 4 Tomáš Nožička 2019-11-11 15:39:56 UTC

*** This bug has been marked as a duplicate of bug 1761689 ***