Creating this bug for tracking, as the original bug is labeled as private.

We've seen this issue before on another platform. I'm not entirely sure that the cache is the problem in this particular case, though it may be a contributing factor. I think what is happening is that when we patch the object (in this case, the status object for the 'phase'), that patch queues up another reconcile. This reconcile contains otherwise stale data about the machine. This compounds with the fact that AWS's API is eventually consistent; since the AWS API isn't up to date yet, we search by tags and get no instance. If the machine object weren't stale, we'd look up the instance ID directly and requeue with an error if we didn't find the instance in question (see the sketch below): https://github.com/openshift/cluster-api-provider-aws/blob/master/pkg/actuators/machine/reconciler.go#L242

This generally only happens when you scale up a single machineset by one. If two or more machines are being created at once, that seems to give the cache enough time to catch back up.

The workaround until a patch ships is to keep an eye on pending/unapproved CSRs. Since a machine can only have one associated node, the extra instance will not be able to automatically join the cluster. Unfortunately, that instance will need to be deleted via the cloud provider directly (e.g., the EC2 web console or CLI). As you observed, if you delete the machine that is associated with such an instance, both instances will be cleaned up.
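To make the failure mode concrete, here is a minimal Go sketch of the existence check described above, written against the aws-sdk-go v1 DescribeInstances API. The function name instanceExists, its parameters, and the tag filter are hypothetical illustrations for this comment, not the actual reconciler code linked above.

```go
// Hypothetical sketch of the existence check. With a recorded instance
// ID, "not found" is surfaced as an error so the reconcile requeues.
// With only tags, an eventually consistent empty result looks like "no
// instance", which is what leads the actuator to create a duplicate.
package machine

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/aws/aws-sdk-go/service/ec2/ec2iface"
)

func instanceExists(client ec2iface.EC2API, instanceID, clusterTagKey string) (bool, error) {
	if instanceID != "" {
		// Direct lookup: if the instance isn't visible yet, return an
		// error so the machine is requeued instead of re-created.
		out, err := client.DescribeInstances(&ec2.DescribeInstancesInput{
			InstanceIds: []*string{aws.String(instanceID)},
		})
		if err != nil || len(out.Reservations) == 0 {
			return false, fmt.Errorf("instance %s not found yet: %v", instanceID, err)
		}
		return true, nil
	}
	// Fallback: search by tags. Because the AWS API is eventually
	// consistent, a just-created instance may not appear here, and an
	// empty result is (wrongly, when the machine object is stale)
	// interpreted as "the instance does not exist".
	out, err := client.DescribeInstances(&ec2.DescribeInstancesInput{
		Filters: []*ec2.Filter{{
			Name:   aws.String("tag-key"),
			Values: []*string{aws.String(clusterTagKey)},
		}},
	})
	if err != nil {
		return false, err
	}
	return len(out.Reservations) > 0, nil
}
```

The key point is the branch: the direct-lookup path can distinguish "not visible yet" from "does not exist", while the tag-search path cannot, which is why a stale machine object (no instance ID recorded) plus eventual consistency produces a second instance.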
*** Bug 1920770 has been marked as a duplicate of this bug. ***
This will now need to target 4.8, as we have passed code freeze.
Hi, can we get an update on this BZ? Regards, Selim
Hi Selim, it looks like we have two pull requests open to address this BZ. One has merged, but the other needs another review and a rebase before we can merge it. It seems like we should be able to merge this for the upcoming 4.8 release.
Note: half of this is already merged to 4.8, and half is merged to 4.9. We will need to backport the second half to 4.8.
verified

clusterversion: 4.9.0-0.nightly-2021-06-30-235246

Seems this bug is hard to reproduce; I tried to scale up 1 replica at a time, but couldn't reproduce. From the log I can see the phase is set before creation, so moving to verified.

I0701 07:36:58.347651 1 controller.go:174] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: reconciling Machine
I0701 07:36:58.364264 1 controller.go:174] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: reconciling Machine
I0701 07:36:58.364364 1 actuator.go:104] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: actuator checking if machine exists
I0701 07:36:58.419813 1 reconciler.go:265] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: Instance does not exist
I0701 07:36:58.419843 1 controller.go:357] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: setting phase to Provisioning and requeuing
I0701 07:36:58.419851 1 controller.go:482] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: going into phase "Provisioning"
I0701 07:36:58.433566 1 controller.go:174] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: reconciling Machine
I0701 07:36:58.433674 1 actuator.go:104] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: actuator checking if machine exists
I0701 07:36:58.462316 1 controller.go:59] controllers/MachineSet "msg"="Reconciling" "machineset"="miyadav-01jul-bkfgg-worker-us-east-2a" "namespace"="openshift-m
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:3759