Description of problem: If a cloud instance backing a machine has stopped, and the machine is reconciled again later for some reason, the stopped instance will be deleted and a new instance will be created in its place. This behavior is undocumented, likely unexpected, and probably something we should remove.
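The gist of the fix can be sketched as a change to how the actuator classifies the backing instance during reconcile. This is a minimal, hypothetical sketch (the state names mirror EC2's lifecycle states; `needsReplacement` and its signature are illustrative, not the actual actuator API): a stopped instance still exists and should be left alone, rather than being treated like a missing instance and re-provisioned.

```go
package main

import "fmt"

// instanceState mirrors EC2 lifecycle state names; hypothetical type.
type instanceState string

const (
	running    instanceState = "running"
	stopped    instanceState = "stopped"
	terminated instanceState = "terminated"
)

// needsReplacement sketches the corrected check: only a missing or
// terminated backing instance should trigger re-provisioning. Before
// the fix, a stopped instance was (effectively) also treated as gone,
// causing it to be deleted and replaced.
func needsReplacement(state instanceState, found bool) bool {
	if !found {
		return true // no backing instance exists at all
	}
	switch state {
	case terminated:
		return true
	case stopped:
		return false // the fix: a stopped instance is left in place
	default:
		return false // running (or transitioning) instances are fine
	}
}

func main() {
	fmt.Println(needsReplacement(stopped, true))    // false: leave it alone
	fmt.Println(needsReplacement(terminated, true)) // true: provision anew
}
```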
Merged in master.
How to verify (QE):

Prior to this patch:
1) Stop a worker instance in the AWS console.
2) Wait for the node to go unready.
3) After the node is unready, within a minute or two you should see a new instance provisioned in the AWS console with the same tag.Name as the instance you stopped.
4) The old instance will be terminated.

After this patch:
1) Stop a worker instance in the AWS console.
2) Wait for the node to go unready.
3) After the node is unready, wait a few minutes and verify there are no new instances in the AWS console with the same tag.Name as the instance you stopped.
4) The instance will not be terminated and can be successfully restarted.
Verified. clusterversion: 4.2.0-0.ci-2019-06-18-001241. Stopped a worker instance; no new instance with the same tag.Name was provisioned. If the instance is started again in the AWS console, the node can rejoin the cluster.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2922