Bug 1788253 - [Feature:DeploymentConfig] deploymentconfigs adoption [Conformance] will orphan all RCs and adopt them back when recreated
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: openshift-controller-manager
Version: 4.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.4.0
Assignee: Maciej Szulik
QA Contact: zhou ying
URL:
Whiteboard: workloads
Depends On:
Blocks: 1795629
Reported: 2020-01-06 20:14 UTC by Ben Parees
Modified: 2020-05-04 11:23 UTC
CC List: 7 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
: 1795629 (view as bug list)
Environment:
Last Closed: 2020-05-04 11:22:33 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift origin pull 24366 0 None closed Bug 1788253: fix deployment config flake by using informer 2020-04-29 09:46:01 UTC
Red Hat Product Errata RHBA-2020:0581 0 None None None 2020-05-04 11:23:09 UTC

Description Ben Parees 2020-01-06 20:14:10 UTC
Description of problem:
[Feature:DeploymentConfig] deploymentconfigs adoption [Conformance] will orphan all RCs and adopt them back when recreated [Suite:openshift/conformance/parallel/minimal] (2m41s)
fail [github.com/openshift/origin/test/extended/deployments/deployments.go:1671]: Unexpected error:
    <*errors.errorString | 0xc00028e1d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

from https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-azure-4.3/763
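
For reference, "timed out waiting for the condition" is the generic error returned by the k8s.io/apimachinery wait helpers when a polled condition never becomes true before the timeout. A minimal, self-contained sketch of that pattern (the condition here is hypothetical, standing in for the test's real adoption check; this is not the actual origin test code):

    package main

    import (
        "errors"
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Poll every second for up to 5 minutes. The condition below never
        // returns true, standing in for "the recreated DC never adopted all
        // of its RCs in time".
        err := wait.Poll(1*time.Second, 5*time.Minute, func() (bool, error) {
            return false, nil // hypothetical condition
        })
        // On timeout, wait.Poll returns wait.ErrWaitTimeout, whose message
        // is exactly "timed out waiting for the condition".
        fmt.Println(errors.Is(err, wait.ErrWaitTimeout), err)
    }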


How reproducible:
high frequency flake as seen in:
https://testgrid.k8s.io/redhat-openshift-ocp-release-4.3-informing#release-openshift-ocp-installer-e2e-azure-4.3

Comment 1 Michal Fojtik 2020-01-07 12:37:23 UTC
The real failure seems to be:

Jan  6 17:18:44.760: INFO: unable to fetch logs for pods: [deployment-simple-1-p6s72[e2e-test-cli-deployment-c5t4g].container[myapp].error=the server rejected our request for an unknown reason (get pods deployment-simple-1-p6s72), deployment-simple-2-hgq9k[e2e-test-cli-deployment-c5t4g].container[myapp].error=the server rejected our request for an unknown reason (get pods deployment-simple-2-hgq9k)]

Comment 2 Maciej Szulik 2020-01-07 13:03:08 UTC
From looking at the events between 18:18:00 and 18:37:33 the cluster was experiencing difficulties connecting to Azure, which might be related.

18:18:00 (1) "" VolumeFailedDelete compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk ci-op-cy32wifd-cbd67-ccwd7-dynamic-pvc-92c19592-cd77-4b98-91d5-a930d56126de is attached to VM /subscriptions/d38f1e38-4bed-438e-b227-833f997adf6a/resourceGroups/ci-op-cy32wifd-cbd67-ccwd7-rg/providers/Microsoft.Compute/virtualMachines/ci-op-cy32wifd-cbd67-ccwd7-worker-centralus2-9glj5."
18:19:33 (1) "openshift-ingress" SyncLoadBalancerFailed Error syncing load balancer: failed to ensure load balancer: azure - cloud provider rate limited(read) for operation:PublicIPGet
18:19:33 (1) "" VolumeFailedDelete compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk ci-op-cy32wifd-cbd67-ccwd7-dynamic-pvc-b93eb3c3-32df-406c-b20e-11f0249de295 is attached to VM /subscriptions/d38f1e38-4bed-438e-b227-833f997adf6a/resourceGroups/ci-op-cy32wifd-cbd67-ccwd7-rg/providers/Microsoft.Compute/virtualMachines/ci-op-cy32wifd-cbd67-ccwd7-worker-centralus1-jtv4x."
18:19:55 (1) "" VolumeFailedDelete compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk ci-op-cy32wifd-cbd67-ccwd7-dynamic-pvc-366c971f-0c30-4cb7-bba8-8df5a0495645 is attached to VM /subscriptions/d38f1e38-4bed-438e-b227-833f997adf6a/resourceGroups/ci-op-cy32wifd-cbd67-ccwd7-rg/providers/Microsoft.Compute/virtualMachines/ci-op-cy32wifd-cbd67-ccwd7-worker-centralus3-7pp6w."
18:20:50 (1) "" VolumeFailedDelete compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk ci-op-cy32wifd-cbd67-ccwd7-dynamic-pvc-cd3ff25c-6144-4821-81de-cd9765bef35c is attached to VM /subscriptions/d38f1e38-4bed-438e-b227-833f997adf6a/resourceGroups/ci-op-cy32wifd-cbd67-ccwd7-rg/providers/Microsoft.Compute/virtualMachines/ci-op-cy32wifd-cbd67-ccwd7-worker-centralus2-9glj5."
18:21:10 (1) "" VolumeFailedDelete compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk ci-op-cy32wifd-cbd67-ccwd7-dynamic-pvc-1f56efc4-831c-496d-8790-8d67e39a4437 is attached to VM /subscriptions/d38f1e38-4bed-438e-b227-833f997adf6a/resourceGroups/ci-op-cy32wifd-cbd67-ccwd7-rg/providers/Microsoft.Compute/virtualMachines/ci-op-cy32wifd-cbd67-ccwd7-worker-centralus1-jtv4x."
18:22:37 (1) "" VolumeFailedDelete compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk ci-op-cy32wifd-cbd67-ccwd7-dynamic-pvc-5b730ddf-ff64-4571-835f-4363b7eb2431 is attached to VM /subscriptions/d38f1e38-4bed-438e-b227-833f997adf6a/resourceGroups/ci-op-cy32wifd-cbd67-ccwd7-rg/providers/Microsoft.Compute/virtualMachines/ci-op-cy32wifd-cbd67-ccwd7-worker-centralus2-9glj5."
18:23:20 (1) "openshift-sdn" Unhealthy Liveness probe failed: ovs-ofctl: br0: failed to connect to socket (Broken pipe)

18:24:06 (1) "" VolumeFailedDelete compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk ci-op-cy32wifd-cbd67-ccwd7-dynamic-pvc-71476c30-6e22-4c23-bbef-311e7acaf747 is attached to VM /subscriptions/d38f1e38-4bed-438e-b227-833f997adf6a/resourceGroups/ci-op-cy32wifd-cbd67-ccwd7-rg/providers/Microsoft.Compute/virtualMachines/ci-op-cy32wifd-cbd67-ccwd7-worker-centralus1-jtv4x."
18:24:34 (2) "openshift-ingress" SyncLoadBalancerFailed Error syncing load balancer: failed to ensure load balancer: azure - cloud provider rate limited(read) for operation:NSGGet
18:24:37 (1) "" VolumeFailedDelete compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk ci-op-cy32wifd-cbd67-ccwd7-dynamic-pvc-653171d4-5400-45f9-9de8-6da533623cb0 is attached to VM /subscriptions/d38f1e38-4bed-438e-b227-833f997adf6a/resourceGroups/ci-op-cy32wifd-cbd67-ccwd7-rg/providers/Microsoft.Compute/virtualMachines/ci-op-cy32wifd-cbd67-ccwd7-worker-centralus1-jtv4x."
18:24:46 (1) "" VolumeFailedDelete compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk ci-op-cy32wifd-cbd67-ccwd7-dynamic-pvc-1d71487d-b002-4fe8-a75b-c81855030df0 is attached to VM /subscriptions/d38f1e38-4bed-438e-b227-833f997adf6a/resourceGroups/ci-op-cy32wifd-cbd67-ccwd7-rg/providers/Microsoft.Compute/virtualMachines/ci-op-cy32wifd-cbd67-ccwd7-worker-centralus3-7pp6w."
18:26:17 (1) "" VolumeFailedDelete compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk ci-op-cy32wifd-cbd67-ccwd7-dynamic-pvc-c422041b-7007-42e8-89b6-eda6ecd8ca54 is attached to VM /subscriptions/d38f1e38-4bed-438e-b227-833f997adf6a/resourceGroups/ci-op-cy32wifd-cbd67-ccwd7-rg/providers/Microsoft.Compute/virtualMachines/ci-op-cy32wifd-cbd67-ccwd7-worker-centralus1-jtv4x."
18:28:46 (1) "" VolumeFailedDelete compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk ci-op-cy32wifd-cbd67-ccwd7-dynamic-pvc-a8438adb-f130-4045-b2e5-f1313e5bdaff is attached to VM /subscriptions/d38f1e38-4bed-438e-b227-833f997adf6a/resourceGroups/ci-op-cy32wifd-cbd67-ccwd7-rg/providers/Microsoft.Compute/virtualMachines/ci-op-cy32wifd-cbd67-ccwd7-worker-centralus2-9glj5."
18:28:58 (1) "" VolumeFailedDelete compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk ci-op-cy32wifd-cbd67-ccwd7-dynamic-pvc-fc769998-167b-4531-8791-027557e3f2fe is attached to VM /subscriptions/d38f1e38-4bed-438e-b227-833f997adf6a/resourceGroups/ci-op-cy32wifd-cbd67-ccwd7-rg/providers/Microsoft.Compute/virtualMachines/ci-op-cy32wifd-cbd67-ccwd7-worker-centralus2-9glj5."
18:29:34 (2) "openshift-ingress" SyncLoadBalancerFailed Error syncing load balancer: failed to ensure load balancer: azure - cloud provider rate limited(read) for operation:NSGGet
18:30:20 (1) "" VolumeFailedDelete compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk ci-op-cy32wifd-cbd67-ccwd7-dynamic-pvc-c56e6222-08f1-4595-8748-e0555458fbf8 is attached to VM /subscriptions/d38f1e38-4bed-438e-b227-833f997adf6a/resourceGroups/ci-op-cy32wifd-cbd67-ccwd7-rg/providers/Microsoft.Compute/virtualMachines/ci-op-cy32wifd-cbd67-ccwd7-worker-centralus3-7pp6w."
18:30:56 (1) "" VolumeFailedDelete compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk ci-op-cy32wifd-cbd67-ccwd7-dynamic-pvc-3514de1e-780a-4e20-bf17-e5230554d573 is attached to VM /subscriptions/d38f1e38-4bed-438e-b227-833f997adf6a/resourceGroups/ci-op-cy32wifd-cbd67-ccwd7-rg/providers/Microsoft.Compute/virtualMachines/ci-op-cy32wifd-cbd67-ccwd7-worker-centralus3-7pp6w."
18:31:41 (1) "" VolumeFailedDelete compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk ci-op-cy32wifd-cbd67-ccwd7-dynamic-pvc-d36c8cf3-b534-49b3-8f4b-52f77c833231 is attached to VM /subscriptions/d38f1e38-4bed-438e-b227-833f997adf6a/resourceGroups/ci-op-cy32wifd-cbd67-ccwd7-rg/providers/Microsoft.Compute/virtualMachines/ci-op-cy32wifd-cbd67-ccwd7-worker-centralus2-9glj5."
18:33:17 (1) "" VolumeFailedDelete compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk ci-op-cy32wifd-cbd67-ccwd7-dynamic-pvc-b6550c4f-7c5e-408c-80c2-1cf746fc3714 is attached to VM /subscriptions/d38f1e38-4bed-438e-b227-833f997adf6a/resourceGroups/ci-op-cy32wifd-cbd67-ccwd7-rg/providers/Microsoft.Compute/virtualMachines/ci-op-cy32wifd-cbd67-ccwd7-worker-centralus2-9glj5."
18:35:51 (1) "" VolumeFailedDelete compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk ci-op-cy32wifd-cbd67-ccwd7-dynamic-pvc-ad584883-d92e-4d2f-bbc8-654f43607c5e is attached to VM /subscriptions/d38f1e38-4bed-438e-b227-833f997adf6a/resourceGroups/ci-op-cy32wifd-cbd67-ccwd7-rg/providers/Microsoft.Compute/virtualMachines/ci-op-cy32wifd-cbd67-ccwd7-worker-centralus1-jtv4x."
18:37:33 (2) "" RecyclerPod Recycler pod: Successfully assigned default/recycler-for-nfs-8vhfm to ci-op-cy32wifd-cbd67-ccwd7-worker-centralus1-jtv4x

Comment 3 Michal Fojtik 2020-01-07 13:10:49 UTC
Let's try https://github.com/openshift/origin/pull/24366 to address the API server instability in AWS; however, there is another wait loop in that test that we might hit.
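
For context on the informer approach named in the PR title ("fix deployment config flake by using informer"), here is a hedged sketch, not the actual PR 24366 code: the helper name, client parameter, and condition are assumptions. The point is that watchtools.UntilWithSync drives an informer internally, so it re-lists and re-watches when the underlying WATCH connection is dropped, whereas a bare Watch call would simply end.

    import (
        "context"
        "time"

        appsv1 "github.com/openshift/api/apps/v1"
        appsv1client "github.com/openshift/client-go/apps/clientset/versioned/typed/apps/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/fields"
        "k8s.io/apimachinery/pkg/runtime"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/tools/cache"
        watchtools "k8s.io/client-go/tools/watch"
    )

    // waitForLatestVersion blocks until the named DC reports at least
    // wantVersion, or the 5-minute context expires. Because UntilWithSync
    // runs an informer under the hood, a WATCH dropped by the apiserver or
    // a load balancer in front of it is transparently re-established.
    func waitForLatestVersion(dcClient appsv1client.DeploymentConfigInterface, name string, wantVersion int64) error {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
        defer cancel()

        selector := fields.OneTermEqualSelector("metadata.name", name).String()
        lw := &cache.ListWatch{
            ListFunc: func(options metav1.ListOptions) (runtime.Object, error) {
                options.FieldSelector = selector
                return dcClient.List(ctx, options)
            },
            WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) {
                options.FieldSelector = selector
                return dcClient.Watch(ctx, options)
            },
        }

        _, err := watchtools.UntilWithSync(ctx, lw, &appsv1.DeploymentConfig{}, nil,
            func(event watch.Event) (bool, error) {
                dc, ok := event.Object.(*appsv1.DeploymentConfig)
                return ok && dc.Status.LatestVersion >= wantVersion, nil
            })
        return err
    }

The reflector inside the informer handles reconnects and resourceVersion bookkeeping itself, which is what masks short-lived WATCH drops of the kind discussed later in this bug.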

Comment 4 Michal Fojtik 2020-01-07 13:25:52 UTC
I fixed both loops in the test; however, the logs indicate this might be caused by Azure API rate limiting, which can result in the kubelet lagging in marking pods ready, which can then cause this test to fail because the replicas are not what we expect within the 5-minute window...

Comment 5 Tomáš Nožička 2020-01-07 13:47:56 UTC
Jan 6 17:18:07.914: INFO: wait: LatestVersion: 2
STEP: making sure DC can be scaled
[AfterEach] adoption [Conformance]
Jan 6 17:18:38.221: INFO: Running 'oc --namespace=e2e-test-cli-deployment-c5t4g --config=/tmp/configfile598927012 get dc/deployment-simple -o yaml'

The apiserver, or an LB in front of it, dropped the WATCH after 30s. That means we have an apiserver or LB issue, which we can mask in the test but should preferably fix.

Comment 7 Tomáš Nožička 2020-01-08 11:20:50 UTC
Also seen on aws-shared-vpc, again after 30 seconds (the timeout in the code is 5 minutes):

Dec 20 15:11:40.572: INFO: wait: LatestVersion: 2
STEP: making sure DC can be scaled
[AfterEach] adoption [Conformance]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1560
Dec 20 15:12:10.830: INFO: Running 'oc --namespace=e2e-test-cli-deployment-tcshv --config=/tmp/configfile013024021 get dc/deployment-simple -o yaml'

https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-shared-vpc-4.3/310#1:build-log.txt%3A2430

Comment 9 wewang 2020-01-13 03:22:00 UTC
The PR only fixed this in master; the issue still exists in 4.4 and 4.3, so it should be cloned and cherry-picked to 4.3 and 4.4.
version:
4.4.0-0.nightly-2020-01-13-010304

4.4 job: https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-azure-4.4/377
4.3 job: https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-azure-4.3/786

Comment 10 Maciej Szulik 2020-01-28 11:20:43 UTC
The issue was fixed in 4.4, since that's the current version we're working on. I don't think we'll be backporting this to 4.3. 
Moving back to qa.

Comment 12 wewang 2020-02-03 03:11:49 UTC
@xxia Could you help the workloads team verify it, since the Whiteboard is workloads? Thanks.

Comment 13 Martin André 2020-02-03 15:21:00 UTC
CI doesn't seem to agree that this bug is fixed. It's still a frequent flake on the 4.4 branch:

https://ci-search-ci-search-next.svc.ci.openshift.org/?search=failed%3A+.*will+orphan+all+RCs+and+adopt+them+back+when+recreated&maxAge=336h&context=2&type=all

Comment 15 Maciej Szulik 2020-02-24 12:16:23 UTC
From what I checked, all of the instances from the search point to passed tests, so I'm moving this back to qa so we can also merge the 4.3 fix.

Comment 17 Ben Bennett 2020-02-25 20:03:05 UTC
This happens in 4.1 too... should this get backported at least to 4.2?
  https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-4.1/1380

Comment 19 Maciej Szulik 2020-02-26 12:01:46 UTC
Those instances do cause the job to fail, so I'm moving this back to qa; if we don't pass 4.4 we won't be able to merge the 4.3 fix,
and we'll get stuck if people keep opening 4.4 bugs for other versions. I don't think we'll merge a 4.2 fix, but I might be wrong; I'd like
to start with at least getting the fix into 4.4 and 4.3.

Comment 21 zhou ying 2020-03-02 02:02:44 UTC
OK, re https://bugzilla.redhat.com/show_bug.cgi?id=1788253#c19: I checked other instances like https://testgrid.k8s.io/redhat-openshift-ocp-release-4.4-informing#release-openshift-ocp-installer-e2e-aws-4.4 and can't reproduce the issue. Will verify.

Comment 22 W. Trevor King 2020-03-17 20:56:02 UTC
PR landed back in January [1], before the 4.4/4.5 fork, and yet I still see 4.4 and 4.5 CI failing with these symptoms.  For example [2,3].

Mar 16 19:00:25.139: INFO: wait: LatestVersion: 2
STEP: making sure DC can be scaled
[AfterEach] adoption
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1566
Mar 16 19:00:55.197: INFO: Running 'oc --namespace=e2e-test-cli-deployment-6hd86 --kubeconfig=/tmp/configfile536210205 get dc/deployment-simple -o yaml'

Is that the same 30s timeout mentioned in comment 5?  Is there a separate bug tracking the "apiserver or LB issue" you mentioned?  The test itself is failing (or at least flaking) in 11% of 4.4/4.5 release promotion jobs over the past 24h [4].  And in case it helps distinguish the 30s issue from other issues that also kill this test:

$ curl -s 'https://search.svc.ci.openshift.org/search?name=^release-openshift-ocp-.*4.%5b45%5d$&search=failed:.*deploymentconfigs+adoption.*will+orphan+all+RCs+and+adopt+them+back+when+recreated&search=making+sure+DC+can+be+scaled&context=3&type=build-log' | jq -r '. | to_entries[] | select((.value | length) == 2) | .context = .value["making sure DC can be scaled"][0].context | .key + "\n  " + (.context | join("\n  "))'
https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-aws-4.5/421
  Mar 16 19:00:24.657: INFO: wait: LatestVersion: 1
  Mar 16 19:00:24.903: INFO: wait: LatestVersion: 2
  Mar 16 19:00:25.139: INFO: wait: LatestVersion: 2
  STEP: making sure DC can be scaled
  [AfterEach] adoption
    /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1566
  Mar 16 19:00:55.197: INFO: Running 'oc --namespace=e2e-test-cli-deployment-6hd86 --kubeconfig=/tmp/configfile536210205 get dc/deployment-simple -o yaml'
https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-aws-4.5/466
  Mar 17 11:56:49.707: INFO: wait: LatestVersion: 1
  Mar 17 11:56:50.073: INFO: wait: LatestVersion: 2
  Mar 17 11:56:50.306: INFO: wait: LatestVersion: 2
  STEP: making sure DC can be scaled
  [AfterEach] adoption
    /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1566
  Mar 17 11:57:20.343: INFO: Running 'oc --namespace=e2e-test-cli-deployment-57v9h --kubeconfig=/tmp/configfile109441358 get dc/deployment-simple -o yaml'
https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-aws-fips-4.5/248
  Mar 17 14:44:05.739: INFO: wait: LatestVersion: 1
  Mar 17 14:44:05.965: INFO: wait: LatestVersion: 2
  Mar 17 14:44:06.194: INFO: wait: LatestVersion: 2
  STEP: making sure DC can be scaled
  [AfterEach] adoption
    /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1566
  Mar 17 14:44:36.284: INFO: Running 'oc --namespace=e2e-test-cli-deployment-zb9cs --kubeconfig=/tmp/configfile537349007 get dc/deployment-simple -o yaml'
https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-aws-ovn-4.5/213
  Mar 15 12:30:16.913: INFO: wait: LatestVersion: 1
  Mar 15 12:30:17.138: INFO: wait: LatestVersion: 2
  Mar 15 12:30:17.364: INFO: wait: LatestVersion: 2
  STEP: making sure DC can be scaled
  [AfterEach] adoption
    /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1566
  Mar 15 12:30:47.467: INFO: Running 'oc --namespace=e2e-test-cli-deployment-qv4sz --kubeconfig=/tmp/configfile452291367 get dc/deployment-simple -o yaml'
https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-aws-ovn-4.5/224
  Mar 16 17:13:57.882: INFO: wait: LatestVersion: 1
  Mar 16 17:13:58.156: INFO: wait: LatestVersion: 2
  Mar 16 17:13:58.381: INFO: wait: LatestVersion: 2
  STEP: making sure DC can be scaled
  [AfterEach] adoption
    /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1566
  Mar 16 17:14:28.496: INFO: Running 'oc --namespace=e2e-test-cli-deployment-vhdst --kubeconfig=/tmp/configfile792942284 get dc/deployment-simple -o yaml'
https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-aws-upi-4.4/1318
  Mar 17 08:42:05.478: INFO: wait: LatestVersion: 1
  Mar 17 08:42:05.705: INFO: wait: LatestVersion: 2
  Mar 17 08:42:05.929: INFO: wait: LatestVersion: 2
  STEP: making sure DC can be scaled
  [AfterEach] adoption [Conformance]
    /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1566
  Mar 17 08:42:35.981: INFO: Running 'oc --namespace=e2e-test-cli-deployment-wgt65 --kubeconfig=/tmp/configfile432279539 get dc/deployment-simple -o yaml'
https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-azure-4.4/1317
  Mar 15 05:05:20.878: INFO: wait: LatestVersion: 1
  Mar 15 05:05:21.282: INFO: wait: LatestVersion: 2
  Mar 15 05:05:21.499: INFO: wait: LatestVersion: 2
  STEP: making sure DC can be scaled
  [AfterEach] adoption [Conformance]
    /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1566
  Mar 15 05:05:51.560: INFO: Running 'oc --namespace=e2e-test-cli-deployment-qc6gw --kubeconfig=/tmp/configfile183933056 get dc/deployment-simple -o yaml'
https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-azure-4.4/1319
  Mar 15 22:42:29.072: INFO: wait: LatestVersion: 1
  Mar 15 22:42:29.458: INFO: wait: LatestVersion: 2
  Mar 15 22:42:29.693: INFO: wait: LatestVersion: 2
  STEP: making sure DC can be scaled
  [AfterEach] adoption [Conformance]
    /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1566
  Mar 15 22:42:59.771: INFO: Running 'oc --namespace=e2e-test-cli-deployment-9lg9n --kubeconfig=/tmp/configfile559169363 get dc/deployment-simple -o yaml'
https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-azure-4.5/201
  Mar 14 14:35:17.563: INFO: wait: LatestVersion: 1
  Mar 14 14:35:17.782: INFO: wait: LatestVersion: 2
  Mar 14 14:35:18.006: INFO: wait: LatestVersion: 2
  STEP: making sure DC can be scaled
  [AfterEach] adoption [Conformance]
    /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1566
  Mar 14 14:35:48.079: INFO: Running 'oc --namespace=e2e-test-cli-deployment-jjbgx --kubeconfig=/tmp/configfile798307131 get dc/deployment-simple -o yaml'
...


[1]: https://github.com/openshift/origin/pull/24366#event-2933167123
[2]: https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-aws-4.5/421
[3]: https://storage.googleapis.com/origin-ci-test/logs/release-openshift-ocp-installer-e2e-aws-4.5/421/build-log.txt
[4]: https://search.svc.ci.openshift.org/chart?name=^release-openshift-ocp-.*4.%5b45%5d$&search=failed:.*deploymentconfigs%20adoption.*will%20orphan%20all%20RCs%20and%20adopt%20them%20back%20when%20recreated

Comment 23 Ben Parees 2020-03-18 00:21:14 UTC
Trevor, I think the best thing to do is open a new bug and reference this one, since this one is on its way to being closed and it's not clear whether it should be reopened.  The owning team can decide if they want to reopen this and close yours as a dupe, or let this one close and then chase yours.

Sounds like the new bug should be urgent or high.

Comment 24 W. Trevor King 2020-03-18 03:56:21 UTC
Spun out the new/continued occurrences into bug 1814498.

Comment 26 errata-xmlrpc 2020-05-04 11:22:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581

