Bug 1925276 - Double instance create AWS
Summary: Double instance create AWS
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cloud Compute
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.9.0
Assignee: Alexander Demicev
QA Contact: sunzhaohua
URL:
Whiteboard:
Duplicates: 1920770 (view as bug list)
Depends On:
Blocks: 1974680
 
Reported: 2021-02-04 18:14 UTC by Michael Gugino
Modified: 2021-10-18 17:29 UTC
CC: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1933584 1933586 (view as bug list)
Environment:
Last Closed: 2021-10-18 17:29:20 UTC
Target Upstream Version:
Embargoed:
jspeed: needinfo-




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-api-provider-aws pull 406 0 None open Bug 1925276: Fix eventual consistency logic to be consistent 2021-04-29 14:51:31 UTC
Github openshift machine-api-operator pull 857 0 None closed Bug 1925276: Make sure phase is always set before creation 2021-06-07 14:13:52 UTC
Red Hat Product Errata RHSA-2021:3759 0 None None None 2021-10-18 17:29:39 UTC

Description Michael Gugino 2021-02-04 18:14:26 UTC
Creating this bug for tracking as original bug is labeled as private.

We've seen this issue before on another platform.  I'm not entirely sure that the cache is the problem in this particular case, though it may be a contributing factor.  I think what is happening is that when we patch the object (in this case, the status object for the 'phase'), that patch queues up another reconcile.  This reconcile contains otherwise stale data about the machine.  This compounds with the fact that AWS's API is eventually consistent: since the AWS API isn't up to date yet, we search by tags and get no instance.  If the machine object weren't stale, we'd look up the instance-id directly and requeue with an error if we didn't find the instance in question: https://github.com/openshift/cluster-api-provider-aws/blob/master/pkg/actuators/machine/reconciler.go#L242
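
A rough sketch of that lookup ordering, for illustration only (hand-written Go against aws-sdk-go v1; the function and parameter names here are made up and are not the actual cluster-api-provider-aws code):

package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// findInstance prefers a direct lookup by instance ID once one is known and
// only falls back to the eventually consistent tag search for machines that
// have never had an instance recorded.
func findInstance(svc *ec2.EC2, instanceID, machineName string) (*ec2.Instance, error) {
	if instanceID != "" {
		out, err := svc.DescribeInstances(&ec2.DescribeInstancesInput{
			InstanceIds: []*string{aws.String(instanceID)},
		})
		if err != nil {
			// Treat errors as transient and requeue rather than assuming the
			// instance is gone; lookups can briefly fail right after RunInstances.
			return nil, fmt.Errorf("error describing instance %s, requeuing: %w", instanceID, err)
		}
		if len(out.Reservations) == 0 || len(out.Reservations[0].Instances) == 0 {
			return nil, fmt.Errorf("instance %s not returned yet, requeuing", instanceID)
		}
		return out.Reservations[0].Instances[0], nil
	}

	// No recorded instance ID: fall back to the tag search. An empty result is
	// only trustworthy for a machine that has never had an instance created.
	out, err := svc.DescribeInstances(&ec2.DescribeInstancesInput{
		Filters: []*ec2.Filter{
			{Name: aws.String("tag:Name"), Values: []*string{aws.String(machineName)}},
		},
	})
	if err != nil {
		return nil, err
	}
	for _, r := range out.Reservations {
		for _, i := range r.Instances {
			return i, nil
		}
	}
	return nil, nil // genuinely nothing found; safe to create
}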

This generally only happens when you scale up a single machineset by one.  If there are 2 or more machines being created at once, that seems to be enough time for the cache to catch back up.  The workaround until a patch is shipped is to keep an eye on pending/unapproved CSRs.  Since a machine can only have one associated node, the extra instance will not be able to automatically join the cluster.  Unfortunately, that instance will need to be deleted via the cloud provider directly (e.g., the EC2 web console or CLI).  As you observed, if you delete the machine that is associated with such an instance, both instances will be cleaned up.

Comment 1 Michael Gugino 2021-02-04 18:15:59 UTC
*** Bug 1920770 has been marked as a duplicate of this bug. ***

Comment 2 Joel Speed 2021-02-09 17:28:48 UTC
This will now need to target 4.8 as we are past code freeze.

Comment 7 Selim Jahangir 2021-05-26 22:54:05 UTC
Hi
Can we get an update on this BZ?
Regards
selim

Comment 8 Michael McCune 2021-06-01 17:01:17 UTC
hi Selim,

Looks like we have 2 pull requests open to address this BZ. One has merged, but the other needs another review and a rebase before we can merge it. It seems like we should be able to merge this for the upcoming 4.8 release.

Comment 10 Joel Speed 2021-06-22 09:59:53 UTC
Note, half of this is already merged to 4.8, half is merged to 4.9. We will need to backport the second half to 4.8.

Comment 11 sunzhaohua 2021-07-01 08:30:35 UTC
Verified
clusterversion: 4.9.0-0.nightly-2021-06-30-235246
This bug seems hard to reproduce; I tried to scale up 1 replica at a time, but couldn't reproduce it.
From the log I can see the phase is set before creation, so moving to verified.

I0701 07:36:58.347651       1 controller.go:174] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: reconciling Machine
I0701 07:36:58.364264       1 controller.go:174] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: reconciling Machine
I0701 07:36:58.364364       1 actuator.go:104] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: actuator checking if machine exists
I0701 07:36:58.419813       1 reconciler.go:265] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: Instance does not exist
I0701 07:36:58.419843       1 controller.go:357] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: setting phase to Provisioning and requeuing
I0701 07:36:58.419851       1 controller.go:482] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: going into phase "Provisioning"
I0701 07:36:58.433566       1 controller.go:174] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: reconciling Machine
I0701 07:36:58.433674       1 actuator.go:104] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: actuator checking if machine exists
I0701 07:36:58.462316       1 controller.go:59] controllers/MachineSet "msg"="Reconciling" "machineset"="miyadav-01jul-bkfgg-worker-us-east-2a" "namespace"="openshift-m
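
For reference, a minimal sketch of the ordering visible in the log above, where the phase is persisted and the machine is requeued before any instance is created (illustrative Go with made-up types; not the actual machine-api-operator code):

package example

import "fmt"

type machine struct {
	name  string
	phase string
}

type actuator interface {
	Exists(m *machine) (bool, error)
	Create(m *machine) error
}

// reconcile moves a machine with no instance into the Provisioning phase and
// requeues it; creation only happens on a later pass, once the persisted phase
// shows the controller is working from up-to-date data.
func reconcile(a actuator, m *machine) (requeue bool, err error) {
	exists, err := a.Exists(m)
	if err != nil {
		return true, err
	}
	if exists {
		return false, nil
	}
	if m.phase == "" {
		// Persist the phase change first, then requeue, so a subsequent
		// reconcile never treats this machine as brand new and creates a
		// second instance.
		m.phase = "Provisioning"
		fmt.Printf("%s: setting phase to Provisioning and requeuing\n", m.name)
		return true, nil
	}
	fmt.Printf("%s: creating instance\n", m.name)
	return false, a.Create(m)
}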

Comment 17 errata-xmlrpc 2021-10-18 17:29:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:3759

