Bug 1749239 - [ci] [vsphere] Flaky test: will orphan all RCs and adopt them back when recreated
Keywords:
Status: CLOSED DUPLICATE of bug 1761689
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-controller-manager
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.4.0
Assignee: Tomáš Nožička
QA Contact: zhou ying
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-09-05 08:10 UTC by Junqi Zhao
Modified: 2019-11-11 15:39 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-11-11 15:39:56 UTC
Target Upstream Version:
Embargoed:



Description Junqi Zhao 2019-09-05 08:10:29 UTC
Description of problem:
https://storage.googleapis.com/origin-ci-test/logs/canary-openshift-ocp-installer-e2e-vsphere-upi-4.2/47/build-log.txt


failed: (2m7s) 2019-09-04T22:56:15 "[Feature:DeploymentConfig] deploymentconfigs adoption [Conformance] will orphan all RCs and adopt them back when recreated [Suite:openshift/conformance/parallel/minimal]"


Flaky tests:

[Feature:DeploymentConfig] deploymentconfigs adoption [Conformance] will orphan all RCs and adopt them back when recreated [Suite:openshift/conformance/parallel/minimal]
[Feature:Image][triggers] Image change build triggers TestSimpleImageChangeBuildTriggerFromImageStreamTagSTI [Suite:openshift/conformance/parallel]
[Feature:Image][triggers] Image change build triggers TestSimpleImageChangeBuildTriggerFromImageStreamTagSTIWithConfigChange [Suite:openshift/conformance/parallel]
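
For context, not taken from this report: the adoption test exercises the garbage collector's orphan propagation. Deleting the DC with the Orphan policy leaves its RCs behind, and a DC recreated with the same name is expected to adopt them back. Below is a minimal sketch of that delete, assuming the OpenShift apps clientset (github.com/openshift/client-go) and the newer context-taking client signature; the namespace and object names are illustrative.

package main

import (
	"context"

	appsclient "github.com/openshift/client-go/apps/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := appsclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// DeletePropagationOrphan deletes the DC but leaves its RCs in place,
	// with their controller ownerReference cleared by the garbage collector,
	// so a recreated DC with the same name can adopt them back.
	orphan := metav1.DeletePropagationOrphan
	if err := client.AppsV1().DeploymentConfigs("e2e-test").Delete(
		context.TODO(), "deployment-simple",
		metav1.DeleteOptions{PropagationPolicy: &orphan},
	); err != nil {
		panic(err)
	}
}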


Version-Release number of selected component (if applicable):
4.2.0-0.nightly-2019-09-04-215255

How reproducible:
Sometimes

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Tomáš Nožička 2019-09-06 14:54:20 UTC
Where are the artifacts for that run? I can't navigate to them from the link you posted.


RC deployment-simple-2 has only 2 of 3 replicas available

pod deployment-simple-2-5b89l was created at "2019-09-04T22:54:38Z" and started at "2019-09-04T22:54:41Z" ready at "2019-09-04T22:54:51Z"

pod deployment-simple-2-txtb8 was created at "2019-09-04T22:55:02Z" and started at "2019-09-04T22:55:05Z" ready at "2019-09-04T22:55:15Z"

pod deployment-simple-2-x8q4q was created at "2019-09-04T22:55:02Z" and started at "2019-09-04T22:55:05Z" ready at "2019-09-04T22:55:07Z"

It saw the scaling event at:
INFO: At 2019-09-04 22:55:02 +0000 UTC - event for deployment-simple: {deploymentconfig-controller } ReplicationControllerScaled: Scaled replication controller "deployment-simple-2" from 1 to 3

The test was already gathering failure data at Sep  4 22:55:32.566, but it must not have checked again after "2019-09-04T22:55:15Z", when the last pod became ready.
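
A sketch of the kind of readiness poll such a test depends on may make the race concrete. This is a hypothetical helper, assuming client-go and apimachinery's wait package; the suite's actual check may differ. If the last sample lands before the third pod flips to Ready and the timeout then fires, the run fails with 2 of 3 available even though all pods are ready moments later.

package e2e

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForRCReady is a hypothetical stand-in for the e2e availability check:
// it samples the RC status once per second until want replicas are ready or
// the timeout expires. A flake like the one above happens when the final
// readiness transition falls between the last sample and the timeout.
func waitForRCReady(c kubernetes.Interface, ns, name string, want int32) error {
	return wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		rc, err := c.CoreV1().ReplicationControllers(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return rc.Status.ReadyReplicas >= want, nil
	})
}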

The timing is close to where the previous RC was deleted and the new one created. I think I saw something similar reported upstream for ReplicaSets, which share almost the same code.
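
For readers unfamiliar with that shared code path: the RC and RS controllers adopt orphans through the same controller-ref machinery. Roughly, an object with no controller ownerReference whose labels match the controller's selector gets claimed. A simplified, hypothetical sketch of the claim condition:

package e2e

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// canAdopt is a simplified, hypothetical version of the claim check the
// controller-ref manager performs: never steal an object that already has
// a controller, and only adopt an orphan whose labels match the selector.
func canAdopt(sel labels.Selector, rc *corev1.ReplicationController) bool {
	if metav1.GetControllerOf(rc) != nil {
		return false
	}
	return sel.Matches(labels.Set(rc.Labels))
}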

Controller logs would be helpful, though.

Comment 3 Tomáš Nožička 2019-11-06 16:12:16 UTC
I think this is the broken ReplicaSet issue; the fix is in https://github.com/kubernetes/kubernetes/pull/82572.

Comment 4 Tomáš Nožička 2019-11-11 15:39:56 UTC

*** This bug has been marked as a duplicate of bug 1761689 ***

