Description of problem:
Following the procedure here: https://docs.openshift.com/container-platform/4.5/backup_and_restore/replacing-unhealthy-etcd-member.html#restore-replace-stopped-etcd-member_replacing-unhealthy-etcd-member
I wanted to remove an etcd member from the cluster to test some things. I followed the documented command to remove the etcd member, and it completed as expected. A short time later, that etcd member was re-added without user intervention. (Corresponding user scenario: etcd is having problems due to over-utilization and I need to replace this master. etcd might be crashing at the moment, or might not be, but will be restarted in a few minutes.)
I created a new master machine, and it joined the etcd cluster automatically (as I found out later).
Afterwards, I deleted the corresponding original machine via the machine-api. That worked as expected.
Later, I deleted the new master machine via the machine-api (corresponding user scenario: I attempted to use a bigger instance, but I decided to go even bigger).
Unbeknownst to me at the time, etcd now had 4 members, 3/4 of them healthy. When I deleted the newest master, I was left with 2/4 healthy members and quorum was lost.
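To make the failure mode concrete: etcd needs a majority of its members (floor(n/2) + 1) to maintain quorum. A minimal sketch of the arithmetic (illustrative, not part of the original report):

```python
def quorum(members: int) -> int:
    """Minimum number of healthy members an etcd cluster needs to make progress."""
    return members // 2 + 1

# With the intended 3-member cluster, losing one member is tolerable:
assert quorum(3) == 2
# With the accidental 4th member, quorum rises to 3, so only
# 2 healthy members out of 4 means the cluster cannot make progress:
assert quorum(4) == 3
```

Note that growing from 3 to 4 members adds no fault tolerance at all: both sizes tolerate exactly one failure, while the 4-member cluster has one more member that can fail.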
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Remove an etcd member from a healthy master following the product docs. This simulates a member that might have been unhealthy and then became healthy (e.g., a temporary network condition or some other issue).
2. Verify that the etcd member is re-added to the quorum even though the user removed it.
3. Join new master to cluster
4. Verify there are now 4/4 etcd members
5. Delete original master where we removed etcd member via machine-api
6. Verify there are 3/4 healthy etcd members
7. Pretend I don't have enough quota to create an additional master before I delete the one I just created
8. Forget to remove etcd member, or don't forget (TBD).
9. Delete the new master machine via the machine-api
10. API becomes unavailable due to quorum loss.
Actual results:
API unavailable due to quorum loss.

Expected results:
1. Never have 4 etcd members.
2. When an admin removes an etcd member via the established procedure, it never adds itself back.
3. etcd-quorum-guard is only useful if it has the same number of desired replicas as etcd members.
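On point 3, this is the invariant an admin (or the operator) would need to assert; a sketch with hypothetical function names, assuming only that the guard's desired replica count and the real member count can both be read:

```python
def quorum_guard_is_effective(guard_replicas: int, etcd_members: int) -> bool:
    """The disruption budget behind etcd-quorum-guard only protects quorum
    if the guard deployment's replica count tracks the real etcd member count."""
    return guard_replicas == etcd_members

# In the scenario from this report, the guard still assumed 3 replicas
# while etcd had silently grown to 4 members:
assert quorum_guard_is_effective(3, 3)
assert not quorum_guard_is_effective(3, 4)
```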
Now, one could argue that there are alerts around this kind of thing. I'm unsure what alerts may have been firing at the time, as I did this pretty quickly from the terminal. While alerts are certainly useful, expecting users to check current alerts before running a particular set of commands is not great (I certainly failed to do so). In a number of scenarios, if I'm deleting/adding master machines, there are probably alerts firing the entire time, so they're not likely to have a high signal-to-noise ratio during this process.
I’m adding UpcomingSprint, because I was occupied by fixing bugs with higher priority/severity, developing new features with higher priority, or developing new features to improve stability at a macro level. I will revisit this bug next sprint.
This bug hasn't had any activity in the last 30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale" and decreasing the severity/priority. If you have further information on the current state of the bug, please update it, otherwise this bug can be closed in about 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant. Additionally, you can add LifecycleFrozen into Keywords if you think this bug should never be marked as stale. Please consult with bug assignee before you do that.
The LifecycleStale keyword was removed because the needinfo? flag was reset.
The bug assignee was notified.
In order to manage scaling correctly, we need a way to conclude that the member has been removed from the cluster. We are able to read the WAL logs during init and conclude whether we (our member ID) have been removed from the cluster. If we observe this condition, we need to remove the old etcd state. We are not going to be able to get to this in the 4.7 time frame, but it should be a prerequisite for the 4.9 scaling epics.
Another option is checking the member list, but we must still ensure that the cluster ID of the etcd cluster whose membership we are querying is as expected. Otherwise, we could remove etcd state based on observations of the wrong cluster.
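A sketch of that member-list decision logic, with hypothetical field and function names (the real response shape would come from the etcd API; the point is the cluster-ID guard):

```python
def should_remove_local_state(expected_cluster_id, our_member_id, member_list_response):
    """Decide whether local etcd data can be wiped because this member was
    removed from the cluster. Refuse to act on a response from a different
    cluster, so we never wipe state based on the wrong cluster's membership."""
    if member_list_response["cluster_id"] != expected_cluster_id:
        return False  # wrong cluster: take no action
    member_ids = {m["id"] for m in member_list_response["members"]}
    return our_member_id not in member_ids

resp = {"cluster_id": 0xABC, "members": [{"id": 1}, {"id": 2}, {"id": 3}]}
assert should_remove_local_state(0xABC, 4, resp)      # we were removed: wipe state
assert not should_remove_local_state(0xABC, 2, resp)  # still a member: keep state
assert not should_remove_local_state(0xDEF, 4, resp)  # different cluster: refuse
```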
*** Bug 1892413 has been marked as a duplicate of this bug. ***
PR to add verification step of exactly 3 etcd members: https://github.com/openshift/openshift-docs/pull/32579
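The docs change boils down to counting members before deleting a machine. A hypothetical sketch of that check, assuming the JSON shape that `etcdctl member list -w json` produces (a top-level `members` array):

```python
import json

def member_count(member_list_json: str) -> int:
    """Count members in `etcdctl member list -w json`-style output."""
    return len(json.loads(member_list_json)["members"])

# Sample payload standing in for real etcdctl output:
sample = json.dumps({"members": [{"name": "etcd-a"}, {"name": "etcd-b"}, {"name": "etcd-c"}]})
count = member_count(sample)
assert count == 3, f"expected exactly 3 etcd members before deleting a master, found {count}"
```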
Andrea, LGTM, thanks. I can't comment on GitHub because there has been a problem with my two-factor authentication in recent days; I only have review rights.
No worries, thanks @Ge Liu!
Created an RFE for a future enhancement to etcd-operator to avoid re-adding a recently deleted member.
PR has been merged; moving to RELEASE_PENDING.
Updates are live: https://docs.openshift.com/container-platform/4.7/backup_and_restore/replacing-unhealthy-etcd-member.html#restore-replace-stopped-etcd-member_replacing-unhealthy-etcd-member