+++ This bug was initially created as a clone of Bug #1925276 +++

Creating this bug for tracking, as the original bug is labeled as private.

We've seen this issue before on another platform. I'm not entirely sure that the cache is the problem in this particular case, though it may be a contributing factor. I think what is happening is that when we patch the object (in this case, the status object for the 'phase'), that queues up another reconcile. This reconcile contains otherwise stale data about the machine. This compounds with the fact that AWS's API is eventually consistent; since the AWS API isn't up to date yet, we search by tags and get no instance. If the machine object weren't stale, we'd look up the instance ID directly and requeue with an error if we didn't find the instance in question: https://github.com/openshift/cluster-api-provider-aws/blob/master/pkg/actuators/machine/reconciler.go#L242

This generally only happens when you scale up a single machineset by one. If two or more machines are being created at once, that seems to be enough time for the cache to catch back up.

The workaround until a patch ships is to keep an eye on pending/unapproved CSRs. Since a machine can only have one associated node, the extra instance will not be able to automatically join the cluster. Unfortunately, that instance will need to be deleted via the cloud provider directly (e.g., the EC2 web console or CLI). As you observed, if you delete the machine that is associated with such an instance, both instances will be cleaned up.

--- Additional comment from Michael Gugino on 2021-02-05 03:15:59 JST ---

--- Additional comment from Joel Speed on 2021-02-10 02:28:48 JST ---

This will now need to target 4.8, as we have passed code freeze.

--- Additional comment from Selim Jahangir on 2021-02-16 08:07:19 JST ---

Hi,
Will the fix in this BZ be backported to OCP 4.6 and OCP 4.7?
Regards,
Selim
*** Bug 1996961 has been marked as a duplicate of this bug. ***
Verified. Cluster version: validated on 4.7.0-0.nightly-2021-09-17-223502. Scaled a machineset by 1; the logs below confirm the status getting set before creation:

I0918 04:21:26.856276 1 controller.go:58] controllers/MachineSet "msg"="Reconciling" "machineset"="zhsun47-5ttxj-worker-us-east-2c" "namespace"="openshift-machine-api"
I0918 04:21:26.897827 1 controller.go:168] zhsun47-5ttxj-worker-us-east-2c-dsfvm: reconciling Machine
I0918 04:21:26.907775 1 controller.go:168] zhsun47-5ttxj-worker-us-east-2c-dsfvm: reconciling Machine
I0918 04:21:26.907795 1 actuator.go:104] zhsun47-5ttxj-worker-us-east-2c-dsfvm: actuator checking if machine exists
I0918 04:21:26.980496 1 reconciler.go:256] zhsun47-5ttxj-worker-us-east-2c-dsfvm: Instance does not exist
I0918 04:21:26.980518 1 controller.go:426] zhsun47-5ttxj-worker-us-east-2c-dsfvm: going into phase "Provisioning"
I0918 04:21:26.989961 1 controller.go:312] zhsun47-5ttxj-worker-us-east-2c-dsfvm: reconciling machine triggers idempotent create
I0918 04:21:26.989994 1 actuator.go:78] zhsun47-5ttxj-worker-us-east-2c-dsfvm: actuator creating machine
I0918 04:21:26.991031 1 reconciler.go:41] zhsun47-5ttxj-worker-us-east-2c-dsfvm: creating machine
E0918 04:21:26.991046 1 reconciler.go:266] NodeRef not found in machine zhsun47-5ttxj-worker-us-east-2c-dsfvm
I0918 04:21:27.008848 1 controller.go:58] controllers/MachineSet "msg"="Reconciling" "machineset"="zhsun47-5ttxj-worker-us-east-2c" "namespace"="openshift-machine-api"
I091
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.7.31 bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:3510