+++ This bug was initially created as a clone of Bug #1861896 +++
Description of problem:
In 4.5 and below, if a control plane node's kubelet goes unreachable while its pods are still running (e.g., someone or something stops the kubelet, or the kubelet is otherwise prevented from communicating with the cluster), the machine-api pods that were running on that node will be rescheduled to another node.
If this happens, you essentially have duplicate machine-api controllers running. The effect is bad:
[mgugino@mguginop50 4.5-nightly]$ ./oc get machines -A
NAMESPACE               NAME                                          PHASE     TYPE       REGION      ZONE         AGE
openshift-machine-api   mgugino-deva2-pgdsh-worker-us-west-2b-ljzsj   Running   m5.large   us-west-2   us-west-2b   9m12s
openshift-machine-api   mgugino-deva2-pgdsh-worker-us-west-2b-r9wv7   Running   m5.large   us-west-2   us-west-2b   9m12s
(From AWS console)
i-029a2e8f1a6fa7f79 (mgugino-deva2-pgdsh-worker-us-west-2b-ljzsj)
i-080add2aa273b8aec (mgugino-deva2-pgdsh-worker-us-west-2b-4pctx)
i-057b15daa3fcb3ab8 (mgugino-deva2-pgdsh-worker-us-west-2b-ljzsj)
i-022b14a051a7320fe (mgugino-deva2-pgdsh-worker-us-west-2b-r9wv7)
i-0c24f513eeeec5212 (mgugino-deva2-pgdsh-worker-us-west-2b-r9wv7)
As you can see, I have five instances where I should have two. This is the result of scaling a machineset from 0 to 2 after stopping the kubelet on the node where the machine-api components were running.
First, the machinesets over-provision machines (I temporarily ended up with 3 machines instead of 2). Then, each machine controller races to create an instance. So we can see we have two duplicates, plus an extra instance from the machine that was immediately terminated (the machine controller doing the delete didn't know about the instance the other machine controller had created).
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Identify which node the machine-api controllers are running on.
2. Stop the kubelet on that host.
3. Wait several minutes until the pods are rescheduled onto another host.
4. Scale up a machineset (example commands for these steps are sketched below).
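For anyone reproducing this, the steps above map roughly onto commands like the following (the node and machineset names are placeholders, not values from this cluster):

$ oc get pods -n openshift-machine-api -o wide                             # step 1: find the node
$ oc debug node/<node> -- chroot /host systemctl stop kubelet              # step 2
$ oc get pods -n openshift-machine-api -o wide                             # step 3: watch for rescheduling
$ oc scale machineset <machineset> -n openshift-machine-api --replicas=2   # step 4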
Actual results:
Too many instances and machines are created, and machines are leaked.

Expected results:
Extra instances and machines should not be created and leaked.

Additional info:
We need to come up with a plan for an advisory, as there is no way to detect this condition in-cluster.
--- Additional comment from Joel Speed on 2020-07-30 10:27:24 UTC ---
Have you tried this in 4.6? I assume because of the leader election that has been added, this is not a problem from 4.6 onwards?
--- Additional comment from Michael Gugino on 2020-07-30 13:03:16 UTC ---
(In reply to Joel Speed from comment #1)
> Have you tried this in 4.6? I assume because of the leader election that has
> been added, this is not a problem from 4.6 onwards?
I have not tried it in 4.6; I tried it in 4.5. I'm assuming it does not happen in 4.6 due to leader election, but we should definitely verify.
One indication that you may have this problem is excess CSRs being generated. This may or may not happen, depending on whether the instances boot successfully. If there were any problems with your machinesets/machines that would have caused them not to boot, then there would be no excess CSRs (this sounds extremely unlikely, as it's an edge case of an edge case).
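(For the record, a rough way to check for that would be:

$ oc get csr

and look for an unexpected number of Pending CSRs from the node-bootstrapper, something like one per extra instance that managed to boot.)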
--- Additional comment from Joel Speed on 2020-08-03 13:45:44 UTC ---
> I have not tried it in 4.6; I tried it in 4.5. I'm assuming it does not happen in 4.6 due to leader election, but we should definitely verify.
I have just verified that this isn't a problem in 4.6.
Disabling the kubelet on the master does not cause any issue for the running pod, and as such the pod keeps its leader election lease up to date, preventing the secondary controller from starting.
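For context on the mechanism, this is standard controller-runtime style leader election. A minimal sketch (not the actual machine-api wiring; the lease ID below is made up) looks like:

package main

import (
	"os"

	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// With LeaderElection enabled, the manager acquires a lease before it
	// starts any controllers. A second copy of the binary blocks waiting
	// for the lease instead of reconciling, so only one controller acts.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		LeaderElection:          true,
		LeaderElectionID:        "machine-controller-leader", // made-up ID
		LeaderElectionNamespace: "openshift-machine-api",
	})
	if err != nil {
		os.Exit(1)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}

The property that matters for this bug is that the original pod keeps renewing the lease for as long as it is running, even with its kubelet dead, which matches what I observed.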
--- Additional comment from Michael Gugino on 2020-08-03 14:23:01 UTC ---
(In reply to Joel Speed from comment #3)
> > I have not tried it in 4.6; I tried it in 4.5. I'm assuming it does not happen in 4.6 due to leader election, but we should definitely verify.
> I have just verified that this isn't a problem in 4.6.
> Disabling the kubelet on the master does not cause any issue for the
> running pod, and as such the pod keeps its leader election lease up to
> date, preventing the secondary controller from starting.
Thanks for verifying this.
Any good suggestions for making sure we don't regress on the machine-controller in this area? The other components I'm not as worried about, but the machine-controller leaking instances is obviously really bad.
--- Additional comment from Joel Speed on 2020-08-03 15:29:56 UTC ---
This is quite a tough one to test. I can't really think of a way to check that we don't regress that doesn't involve testing the implementation details, i.e., whether leader election is working.
We could write a test that takes the leader election lock and verifies that the running controller restarts (since it has lost its lease), then creates a machine and verifies that nothing happens, because there is no running machine controller (it being blocked from starting by the test holding the lease).
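A sketch of that idea, using client-go's leaderelection package (the lease name and namespace here are assumptions; check the controller's deployment for the real values):

package e2e

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// holdLeaseAndRun acquires the machine controller's leader election lease
// and runs body while holding it. While we hold the lease, the controller
// cannot (re)acquire it, so creating a Machine inside body should result
// in no cloud instance being created.
func holdLeaseAndRun(cfg *rest.Config, body func(ctx context.Context)) error {
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "machine-controller-leader", // assumed name
			Namespace: "openshift-machine-api",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "regression-test"},
	}
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				body(ctx)
				cancel() // release the lease when the test body is done
			},
			OnStoppedLeading: func() {},
		},
	})
	return nil
}

One caveat: client-go will not steal a live lease, so the test would either have to wait out the current holder's lease or force the takeover by updating the Lease object's holderIdentity directly; this sketch only shows holding the lease so the controller is blocked from starting.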
I am working on backporting the leader election changes to 4.5.
I have been backporting changes to address this issue on 4.5. The changes are currently on hold while we work to get a few final patches in place.
We also need to figure out the proper way to catalog this in Bugzilla, as the tooling is not recognizing the current pull requests.
I think we are just waiting on patches to merge; all of the backports have been proposed.