Description of problem:
https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/canary-openshift-ocp-installer-e2e-openstack-serial-4.2/4

[Feature:Machines][Serial] Managed cluster should grow and decrease when scaling different machineSets simultaneously [Suite:openshift/conformance/serial]

Stdout:
STEP: scaling "ci-op-hvrvq4nj-xjh5s-worker" from 3 to 4 replicas
Sep 6 00:59:04.685: INFO: >>> kubeConfig: /tmp/admin.kubeconfig
STEP: checking scaled up worker nodes are ready
Sep 6 01:06:00.406: INFO: Error getting nodes from machineSet: not all machines have a node reference: map[ci-op-hvrvq4nj-xjh5s-worker-dfznr:ci-op-hvrvq4nj-xjh5s-worker-dfznr ci-op-hvrvq4nj-xjh5s-worker-dxczt:ci-op-hvrvq4nj-xjh5s-worker-dxczt ci-op-hvrvq4nj-xjh5s-worker-gv5jc:ci-op-hvrvq4nj-xjh5s-worker-gv5jc]
Sep 6 01:06:04.932: INFO: Running AfterSuite actions on all nodes
Sep 6 01:06:04.932: INFO: Running AfterSuite actions on node 1

fail [github.com/openshift/origin/test/extended/machines/scale.go:201]: Timed out after 420.000s.
Expected
    <bool>: false
to be true

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
Emilio proposed a patch that increases the timeout to 10 minutes, which should make this test green on the OpenStack platform. https://github.com/openshift/origin/pull/23762
Verified on 4.2.0-0.nightly-2019-09-16-012427
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2922