Bug 1925276

Summary: Double instance create AWS
Product: OpenShift Container Platform
Reporter: Michael Gugino <mgugino>
Component: Cloud Compute
Sub component: Other Providers
Assignee: Alexander Demicev <ademicev>
QA Contact: sunzhaohua <zhsun>
Status: CLOSED ERRATA
Severity: medium
Priority: medium
CC: ademicev, kahara, mimccune, mjahangi, rh-container
Version: 4.7
Flags: jspeed: needinfo-
Target Release: 4.9.0
Hardware: Unspecified
OS: Unspecified
Clones: 1933584, 1933586 (view as bug list)
Last Closed: 2021-10-18 17:29:20 UTC
Type: Bug
Bug Blocks: 1974680

Description Michael Gugino 2021-02-04 18:14:26 UTC
Creating this bug for tracking as original bug is labeled as private.

We've seen this issue before on another platform.  I'm not entirely sure that the cache is the problem in this particular case, though it may be a contributing factor.  I think what is happening is that when we patch the object (in this case, the status object for the 'phase'), that patch queues up another reconcile.  This reconcile contains otherwise stale data about the machine.  That compounds with the fact that AWS's API is eventually consistent: since the AWS API isn't up to date yet, we search by tags and get no instance.  If the machine object weren't stale, we'd look up the instance ID directly and requeue with an error if we didn't find the instance in question: https://github.com/openshift/cluster-api-provider-aws/blob/master/pkg/actuators/machine/reconciler.go#L242
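To make the failure mode concrete, here is a minimal, self-contained Go sketch of the lookup order described above. The types and names (`Machine`, `fakeEC2`, `findInstance`) are hypothetical stand-ins, not the real provider code, which uses the AWS SDK and the Machine API types:

```go
package main

import (
	"errors"
	"fmt"
)

// Machine is a simplified stand-in for the Machine API object.
type Machine struct {
	Name       string
	InstanceID string // recorded on the machine after a successful create
}

// fakeEC2 models AWS eventual consistency: an instance can be visible
// by ID before the tag index has caught up.
type fakeEC2 struct {
	byID  map[string]bool
	byTag map[string]bool
}

var errRequeue = errors.New("instance not found by ID, requeuing")

// findInstance prefers the recorded instance ID and requeues on a miss;
// it only falls back to the tag search when no ID is known. A stale copy
// of the machine has no instance ID, so the tag search (which may lag)
// decides, and a miss looks like "no instance exists yet".
func findInstance(m *Machine, ec2 *fakeEC2) (bool, error) {
	if m.InstanceID != "" {
		if ec2.byID[m.InstanceID] {
			return true, nil
		}
		return false, errRequeue // do NOT fall through and create a duplicate
	}
	return ec2.byTag[m.Name], nil
}

func main() {
	ec2 := &fakeEC2{
		byID:  map[string]bool{"i-0abc": true},
		byTag: map[string]bool{}, // tag index not yet consistent
	}

	// Fresh machine object: the ID lookup sees the instance.
	fresh := &Machine{Name: "worker-a", InstanceID: "i-0abc"}
	found, err := findInstance(fresh, ec2)
	fmt.Println(found, err) // true <nil>

	// Stale machine object: no ID recorded, tag search misses,
	// so the reconcile would create a second instance.
	stale := &Machine{Name: "worker-a"}
	found, err = findInstance(stale, ec2)
	fmt.Println(found, err) // false <nil>
}
```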

This generally only happens when you scale up a single machineset by one.  If two or more machines are being created at once, that seems to give the cache enough time to catch back up.  The workaround until a patch ships is to keep an eye on pending/unapproved CSRs.  Since a machine can only have one associated node, the extra instance will not be able to join the cluster automatically.  Unfortunately, that instance will need to be deleted via the cloud provider directly (e.g., the EC2 web console or CLI).  As you observed, if you delete the machine that is associated with such an instance, both instances will be cleaned up.
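As an illustration of the workaround, the steps above might look like the following against a live cluster (the tag and instance ID are placeholders; adjust the region and filters to your environment):

```shell
# Watch for CSRs that stay Pending: the extra instance's kubelet CSR will
# never be approved, since the machine already has an associated node.
oc get csr | grep -i pending

# Locate the duplicate instance in AWS by its Name tag
# (<machine-name> is a placeholder).
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=<machine-name>" \
  --query "Reservations[].Instances[].InstanceId"

# Delete the stray instance directly via the cloud provider.
aws ec2 terminate-instances --instance-ids <instance-id>
```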

Comment 1 Michael Gugino 2021-02-04 18:15:59 UTC
*** Bug 1920770 has been marked as a duplicate of this bug. ***

Comment 2 Joel Speed 2021-02-09 17:28:48 UTC
This will now need to target 4.8 as we are past code freeze.

Comment 7 Selim Jahangir 2021-05-26 22:54:05 UTC
Can we get an update on this BZ?

Comment 8 Michael McCune 2021-06-01 17:01:17 UTC
hi Selim,

It looks like we have two pull requests open to address this BZ. One has merged, but the other needs another review and a rebase before we can merge it. We should be able to merge this for the upcoming 4.8 release.

Comment 10 Joel Speed 2021-06-22 09:59:53 UTC
Note, half of this is already merged to 4.8, half is merged to 4.9. We will need to backport the second half to 4.8.

Comment 11 sunzhaohua 2021-07-01 08:30:35 UTC
clusterversion: 4.9.0-0.nightly-2021-06-30-235246
This bug seems hard to reproduce; I tried scaling up one replica at a time but couldn't reproduce it.
From the log I can see the phase is set before creation, so moving to Verified.

I0701 07:36:58.347651       1 controller.go:174] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: reconciling Machine
I0701 07:36:58.364264       1 controller.go:174] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: reconciling Machine
I0701 07:36:58.364364       1 actuator.go:104] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: actuator checking if machine exists
I0701 07:36:58.419813       1 reconciler.go:265] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: Instance does not exist
I0701 07:36:58.419843       1 controller.go:357] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: setting phase to Provisioning and requeuing
I0701 07:36:58.419851       1 controller.go:482] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: going into phase "Provisioning"
I0701 07:36:58.433566       1 controller.go:174] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: reconciling Machine
I0701 07:36:58.433674       1 actuator.go:104] miyadav-01jul-bkfgg-worker-us-east-2a-b8g94: actuator checking if machine exists
I0701 07:36:58.462316       1 controller.go:59] controllers/MachineSet "msg"="Reconciling" "machineset"="miyadav-01jul-bkfgg-worker-us-east-2a" "namespace"="openshift-m

Comment 17 errata-xmlrpc 2021-10-18 17:29:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.