Bug 1933584 - [4.6.z] Double instance create AWS
Summary: [4.6.z] Double instance create AWS
Keywords:
Status: CLOSED EOL
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cloud Compute
Version: 4.6.z
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.6.z
Assignee: Alexander Demicev
QA Contact: sunzhaohua
URL:
Whiteboard:
Depends On: 1933586
Blocks:
 
Reported: 2021-03-01 07:41 UTC by Masaki Furuta ( RH )
Modified: 2023-09-15 01:02 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1925276
Environment:
Last Closed: 2021-10-14 09:36:14 UTC
Target Upstream Version:
Embargoed:



Description Masaki Furuta ( RH ) 2021-03-01 07:41:17 UTC
+++ This bug was initially created as a clone of Bug #1925276 +++

Creating this bug for tracking, as the original bug is marked private.

We've seen this issue before on another platform. I'm not entirely sure that the cache is the problem in this particular case, though it may be a contributing factor. I think what is happening is that patching the object (in this case, the status object for the 'phase') queues up another reconcile, and that reconcile contains otherwise stale data about the machine. This compounds with the fact that AWS's API is eventually consistent: because the AWS API isn't up to date yet, we search by tags and get no instance. If the machine object weren't stale, we'd look up the instance ID directly and requeue with an error if we didn't find the instance in question: https://github.com/openshift/cluster-api-provider-aws/blob/master/pkg/actuators/machine/reconciler.go#L242
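
To make the failure mode concrete, here is a minimal, hypothetical Go sketch of the two lookup paths. This is not the actual cluster-api-provider-aws code; the function name, parameters, and the assumption of an aws-sdk-go v1 EC2 client are all illustrative:

// Hypothetical sketch of why the stale machine object matters: if the cached
// Machine still carries the instance ID we can look the instance up directly
// and requeue on a miss; if the ID is missing (stale status) we fall back to a
// tag search, and AWS's eventually consistent DescribeInstances can return
// nothing even though the instance exists, which is what leads to a duplicate.
package sketch

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/aws/aws-sdk-go/service/ec2/ec2iface"
)

func findExistingInstance(svc ec2iface.EC2API, instanceID, machineName string) (*ec2.Instance, error) {
	if instanceID != "" {
		// Authoritative path: look up by ID and force a requeue if the
		// instance is not visible in the API yet.
		out, err := svc.DescribeInstances(&ec2.DescribeInstancesInput{
			InstanceIds: []*string{aws.String(instanceID)},
		})
		if err != nil || len(out.Reservations) == 0 || len(out.Reservations[0].Instances) == 0 {
			return nil, fmt.Errorf("instance %s not found yet, requeueing: %v", instanceID, err)
		}
		return out.Reservations[0].Instances[0], nil
	}

	// Stale path: search by tags. Under eventual consistency this can return
	// nothing for a freshly created instance.
	out, err := svc.DescribeInstances(&ec2.DescribeInstancesInput{
		Filters: []*ec2.Filter{
			{Name: aws.String("tag:Name"), Values: []*string{aws.String(machineName)}},
		},
	})
	if err != nil {
		return nil, err
	}
	for _, r := range out.Reservations {
		if len(r.Instances) > 0 {
			return r.Instances[0], nil
		}
	}
	// Returning "no instance, no error" is the point at which the actuator
	// would go on to create a second instance.
	return nil, nil
}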

This generally only happens when you scale up a single MachineSet by one. If there are two or more machines being created at once, that seems to be enough time for the cache to catch back up. The workaround, until a patch is shipped, is to keep an eye on pending/unapproved CSRs. Since a machine can only have one associated node, the extra instance will not be able to automatically join the cluster. Unfortunately, that instance will need to be deleted via the cloud provider directly (e.g. the EC2 web console or CLI). As you observed, if you delete the machine that is associated with such an instance, both instances will be cleaned up.
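
For anyone cleaning up manually, the sketch below is again hypothetical: it assumes aws-sdk-go v1 and that the instance IDs referenced by the Machines' providerIDs have already been collected (for example from "oc get machines -o yaml"). It only illustrates how one might spot the orphaned instance that then has to be terminated through EC2 directly:

// Hypothetical helper (not part of the product) that lists instances tagged
// for the cluster and flags any whose ID is not referenced by a Machine.
package sketch

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/aws/aws-sdk-go/service/ec2/ec2iface"
)

// clusterID is the infrastructure name; knownIDs holds instance IDs taken
// from the Machines' providerIDs.
func findOrphanInstances(svc ec2iface.EC2API, clusterID string, knownIDs map[string]bool) ([]string, error) {
	out, err := svc.DescribeInstances(&ec2.DescribeInstancesInput{
		Filters: []*ec2.Filter{
			{
				Name:   aws.String("tag-key"),
				Values: []*string{aws.String("kubernetes.io/cluster/" + clusterID)},
			},
		},
	})
	if err != nil {
		return nil, err
	}

	var orphans []string
	for _, r := range out.Reservations {
		for _, inst := range r.Instances {
			if id := aws.StringValue(inst.InstanceId); !knownIDs[id] {
				// This instance is not backed by any Machine, so it is the
				// duplicate that must be deleted via the EC2 console or CLI.
				orphans = append(orphans, id)
				fmt.Printf("instance %s is not referenced by any Machine\n", id)
			}
		}
	}
	return orphans, nil
}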

--- Additional comment from Michael Gugino on 2021-02-05 03:15:59 JST ---



--- Additional comment from Joel Speed on 2021-02-10 02:28:48 JST ---

This will now need to target 4.8 as we have passed code freeze.

--- Additional comment from Selim Jahangir on 2021-02-16 08:07:19 JST ---

Hi
Will the fix in this BZ be backported to ocp4.6 and ocp4.7?

Regards
Selim

Comment 4 Joel Speed 2021-05-19 14:26:47 UTC
Just updating this to note that we won't be able to backport to 4.6 until the fix has been backported to and released in 4.7; there isn't much we can do here right now.

Comment 12 Red Hat Bugzilla 2023-09-15 01:02:27 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days

