Bug 1958913 - "Replacing an unhealthy etcd member whose node is not ready" procedure results in new etcd pod in CrashLoopBackOff
Summary: "Replacing an unhealthy etcd member whose node is not ready" procedure result...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Etcd
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.8.0
Assignee: Sam Batschelet
QA Contact: ge liu
URL:
Whiteboard:
Depends On:
Blocks: 1970141
 
Reported: 2021-05-10 12:19 UTC by Lubov
Modified: 2021-07-27 23:08 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-07-27 23:07:46 UTC
Target Upstream Version:
Embargoed:


Attachments
etcd-master pod description (24.12 KB, text/plain), attached 2021-05-10 12:19 UTC by Lubov


Links
GitHub: openshift/etcd pull 81 (open): "Bug 1958913: discover-etcd-initial-cluster: retry if member is not part of member list and dataDir exists" (last updated 2021-05-12 01:52:31 UTC)
Red Hat Product Errata: RHSA-2021:2438 (last updated 2021-07-27 23:08:08 UTC)

Description Lubov 2021-05-10 12:19:06 UTC
Created attachment 1781673 [details]
etcd-master pod description

Description of problem:
After executing the procedure https://docs.openshift.com/container-platform/4.7/backup_and_restore/replacing-unhealthy-etcd-member.html#restore-replace-stopped-etcd-member_replacing-unhealthy-etcd-member for a master node in the NotReady state, the newly created etcd-master pod is in the CrashLoopBackOff state. The etcd container fails with the error: "member "https://192.168.123.120:2380" is no longer a member of the cluster and should not start" (where 192.168.123.120 is the IP of the deleted etcd member), and the new etcd member is not added.
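
The failing container can be inspected with standard commands like the following (shown for illustration; the pod name matches the listing below):

$ oc describe pod etcd-master-0-0 -n openshift-etcd
$ oc logs etcd-master-0-0 -c etcd -n openshift-etcd --previous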

$ oc get bmh
NAME                   STATE                    CONSUMER                                  ONLINE   ERROR
openshift-master-0-0   externally provisioned   ocp-edge-cluster-0-tc57w-master-0-0       true     
openshift-master-0-1   externally provisioned   ocp-edge-cluster-0-tc57w-master-1         true     
openshift-master-0-2   externally provisioned   ocp-edge-cluster-0-tc57w-master-2         true     
openshift-worker-0-0   provisioned              ocp-edge-cluster-0-tc57w-worker-0-56bfg   true     
openshift-worker-0-1   provisioned              ocp-edge-cluster-0-tc57w-worker-0-v2vln   true

$ oc get machine
NAME                                      PHASE     TYPE   REGION   ZONE   AGE
ocp-edge-cluster-0-tc57w-master-0-0       Running                          9m51s
ocp-edge-cluster-0-tc57w-master-1         Running                          19h
ocp-edge-cluster-0-tc57w-master-2         Running                          19h
ocp-edge-cluster-0-tc57w-worker-0-56bfg   Running                          19h
ocp-edge-cluster-0-tc57w-worker-0-v2vln   Running                          19h

$ oc get pods -n openshift-etcd |grep etcd| egrep -v quorum
etcd-master-0-0                      2/3     CrashLoopBackOff   8          20m
etcd-master-0-1                      3/3     Running            0          21m
etcd-master-0-2                      3/3     Running            0          24m

etcd member list before the procedure
+------------------+---------+------------+------------------------------+--------------------------------------------------------+------------+
|        ID        | STATUS  |    NAME    |          PEER ADDRS          |                      CLIENT ADDRS                      | IS LEARNER |
+------------------+---------+------------+------------------------------+--------------------------------------------------------+------------+
| 2d975c0a88dbf8e3 | started | master-0-0 | https://192.168.123.120:2380 | https://192.168.123.120:2379,unixs://192.168.123.120:0 |      false |
| 437eebe379eefcde | started | master-0-1 | https://192.168.123.137:2380 | https://192.168.123.137:2379,unixs://192.168.123.137:0 |      false |
| c5b1706d6685bd6a | started | master-0-2 | https://192.168.123.128:2380 | https://192.168.123.128:2379,unixs://192.168.123.128:0 |      false |
+------------------+---------+------------+------------------------------+--------------------------------------------------------+------------+

etcd member list after the procedure
+------------------+---------+------------+------------------------------+--------------------------------------------------------+------------+
|        ID        | STATUS  |    NAME    |          PEER ADDRS          |                      CLIENT ADDRS                      | IS LEARNER |
+------------------+---------+------------+------------------------------+--------------------------------------------------------+------------+
| 437eebe379eefcde | started | master-0-1 | https://192.168.123.137:2380 | https://192.168.123.137:2379,unixs://192.168.123.137:0 |      false |
| c5b1706d6685bd6a | started | master-0-2 | https://192.168.123.128:2380 | https://192.168.123.128:2379,unixs://192.168.123.128:0 |      false |
+------------------+---------+------------+------------------------------+--------------------------------------------------------+------------+
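
For reference, these member lists were collected from a healthy etcd pod roughly as follows (illustrative; the pod name is taken from the listing above):

$ oc rsh -n openshift-etcd etcd-master-0-1
sh-4.4# etcdctl member list -w table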

Version-Release number of selected component (if applicable):
4.8.0-0.nightly-2021-05-09-105430

How reproducible:
100 %

Steps to Reproduce:
1. Get one of the master nodes into the NotReady state (we set the BMH "online" field to false to cause it; see the sketch after this list)
2. Proceed with the procedure
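
A rough sketch of the commands involved, assuming the host, member ID, machine, and secret names from the outputs above; the authoritative steps are the ones in the linked documentation:

# take the master offline so its node goes NotReady
$ oc patch bmh openshift-master-0-0 -n openshift-machine-api --type merge -p '{"spec":{"online":false}}'

# from a healthy etcd pod, remove the unhealthy member
$ oc rsh -n openshift-etcd etcd-master-0-1
sh-4.4# etcdctl member remove 2d975c0a88dbf8e3

# remove the old member's secrets and delete the machine so it is reprovisioned
$ oc delete secret -n openshift-etcd etcd-peer-master-0-0 etcd-serving-master-0-0 etcd-serving-metrics-master-0-0
$ oc delete machine -n openshift-machine-api ocp-edge-cluster-0-tc57w-master-0-0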

Actual results:
See the description above.

Expected results:
The new etcd-master pod starts and is Running, a new etcd member is added, and the cluster is healthy.

Additional info:
adding must-gather and new etcd-master pod description

Comment 2 Sam Batschelet 2021-05-12 01:55:23 UTC
Thanks for the report. Could you please retest with the proposed fix?[1] By returning an error instead of retrying, we didn't give etcd time to scale up the pod.

[1] https://github.com/openshift/etcd/pull/81
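
To retest, the replacement procedure can be repeated on a build carrying the patch and the result checked roughly like this (pod names as in the report):

$ oc get pods -n openshift-etcd | grep etcd | egrep -v quorum
$ oc rsh -n openshift-etcd etcd-master-0-1
sh-4.4# etcdctl member list -w table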

Comment 3 Sam Batschelet 2021-05-12 01:57:34 UTC
Marking blocker - as there is a manual remedy, which includes deleting the current data directory, as outlined in the section

"Replacing an unhealthy etcd member whose etcd pod is crashlooping"

```
Move the etcd data directory to a different location:


sh-4.2# mv /var/lib/etcd/ /tmp
```
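
For context, the surrounding steps in that section look roughly like this (a sketch; the node name master-0-0 is assumed from the pod names above, and the documented procedure is authoritative):

```
$ oc debug node/master-0-0
sh-4.2# chroot /host

# stop the crashlooping static pod by moving its manifest aside
sh-4.2# mkdir -p /var/lib/etcd-backup
sh-4.2# mv /etc/kubernetes/manifests/etcd-pod.yaml /var/lib/etcd-backup/

# move the stale etcd data directory out of the way
sh-4.2# mv /var/lib/etcd/ /tmp
```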

Comment 5 Lubov 2021-05-12 07:53:10 UTC
(In reply to Sam Batschelet from comment #3)
> Marking blocker - as there is a manual remedy which would include deleting
> the current data directory as outlined in the section
> 
> "Replacing an unhealthy etcd member whose etcd pod is crashlooping"
> 
> ```
> Move the etcd data directory to a different location:
> 
> 
> sh-4.2# mv /var/lib/etcd/ /tmp
> ```

Tried this workaround; it helped.

Comment 13 ge liu 2021-05-25 03:42:00 UTC
Verified with comment 9

Comment 14 ge liu 2021-05-25 03:43:22 UTC
revising: Verified with comment 11

Comment 18 errata-xmlrpc 2021-07-27 23:07:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438

