+++ This bug was initially created as a clone of Bug #1843876 +++
+++ This bug was initially created as a clone of Bug #1843462 +++
+++ This bug was initially created as a clone of Bug #1843187 +++

When pod expectations are not met, status for workloads can wedge. When status for workloads wedges, operators wait indefinitely. When operators wait indefinitely, status is wrong. When status is wrong, upgrades can fail.

Picking https://github.com/kubernetes/kubernetes/pull/91008 seems like a fix.

--- Additional comment from Maciej Szulik on 2020-06-03 12:54:58 CEST ---
This is waiting to be merged in the queue.
Confirmed with payload 4.3.0-0.nightly-2020-07-12-052232; this issue has been fixed. Deleted one pod while it was terminating and, at the same time, scaled down the deployment; no new pod was created.

[zhouying@dhcp-140-138 ~]$ oc get po
NAME                        READY   STATUS    RESTARTS   AGE
mydeploy-6cb778bf69-6jngn   1/1     Running   0          5m7s
mydeploy-6cb778bf69-ftl6v   1/1     Running   0          5m7s
mydeploy-6cb778bf69-sgkwc   1/1     Running   0          15s
[zhouying@dhcp-140-138 ~]$ oc delete po/mydeploy-6cb778bf69-sgkwc
pod "mydeploy-6cb778bf69-sgkwc" deleted
[zhouying@dhcp-140-138 ~]$ oc get po
NAME                        READY   STATUS    RESTARTS   AGE
mydeploy-6cb778bf69-6jngn   1/1     Running   0          5m30s
mydeploy-6cb778bf69-ftl6v   1/1     Running   0          5m30s
[zhouying@dhcp-140-138 ~]$ oc scale deploy/mydeploy --replicas=2
deployment.extensions/mydeploy scaled
[zhouying@dhcp-140-138 ~]$ oc get po
NAME                        READY   STATUS        RESTARTS   AGE
mydeploy-6cb778bf69-6jngn   1/1     Running       0          5m20s
mydeploy-6cb778bf69-ftl6v   1/1     Running       0          5m20s
mydeploy-6cb778bf69-sgkwc   0/1     Terminating   0          28s
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.3.31 bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:3180