Bug 1822924

Summary: etcd backup script should request snapshot backup from a single endpoint, not multiple
Product: OpenShift Container Platform Reporter: Suresh Kolichala <skolicha>
Component: Etcd Assignee: Suresh Kolichala <skolicha>
Status: CLOSED ERRATA QA Contact: ge liu <geliu>
Severity: unspecified Docs Contact:
Priority: unspecified    
Version: 4.5   
Target Milestone: ---   
Target Release: 4.5.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: Bug Fix
Doc Text:
Cause: The backup script was invoked with ETCDCTL_ENDPOINTS set to all etcd endpoints. Consequence: This was tolerated in etcd 3.3.x, but in etcd 3.4.x the invocation fails. Fix: Set ETCDCTL_ENDPOINTS to a single member. Result: Backups are taken successfully.
Story Points: ---
Clone Of: Environment:
Last Closed: 2020-07-13 17:26:59 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Suresh Kolichala 2020-04-10 14:05:27 UTC
Description of problem:
By default, etcdctl is invoked with ETCDCTL_ENDPOINTS set to all etcd members. However, when invoking snapshot backup, it should be invoked with a single endpoint. This is strictly enforced in the new etcd versions of 3.4.x.
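A minimal shell sketch of the single-endpoint selection the fix describes. The variable names and endpoint values are illustrative (the endpoints are taken from the error message in this report), not the actual cluster-backup.sh code:

```shell
# Illustrative multi-endpoint value, as the backup script would see it.
ALL_ENDPOINTS="https://10.0.145.58:2379,https://10.0.134.79:2379,https://10.0.137.186:2379,https://10.0.5.140:2379"

# etcd 3.4.x rejects `snapshot save` when ETCDCTL_ENDPOINTS lists more than
# one member, so keep only the first entry (everything before the first comma).
export ETCDCTL_ENDPOINTS="${ALL_ENDPOINTS%%,*}"

echo "$ETCDCTL_ENDPOINTS"

# The snapshot can then be requested from that single member, e.g.:
#   etcdctl snapshot save /var/lib/etcd-backup/snapshot.db
```

The `%%,*` parameter expansion strips the longest suffix starting at the first comma, leaving a single endpoint without needing an external tool like `cut`.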

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Log in to one of the master hosts
2. Attempt to take a backup by invoking cluster-backup.sh

Actual results:
In 4.5, where etcd is bumped to 3.4.7, the script fails with an error:
Error: snapshot must be requested to one selected node, not multiple [https://10.0.145.58:2379 https://10.0.134.79:2379 https://10.0.137.186:2379 https://10.0.5.140:2379]

Expected results:
The snapshot db and kube resources are successfully saved to the specified backup directory.

Additional info:

Comment 7 errata-xmlrpc 2020-07-13 17:26:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409