Bug 2095114 - [etcd] cluster-backup.sh script has a conflict to use the '/etc/kubernetes/static-pod-certs' folder if a custom API certificate is defined
Summary: [etcd] cluster-backup.sh script has a conflict to use the '/etc/kubernetes/st...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Etcd
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.10.z
Assignee: Vincent Lours
QA Contact: ge liu
URL:
Whiteboard:
Depends On: 2090929
Blocks: 2095115
 
Reported: 2022-06-09 04:44 UTC by OpenShift BugZilla Robot
Modified: 2023-02-13 14:30 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-06-28 11:50:55 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-etcd-operator pull 848 0 None Merged [release-4.10] Bug 2095114: cluster-backup.sh script has a conflict to use the '/etc/kubernetes/static-pod-certs' folder... 2022-06-16 11:10:19 UTC
Red Hat Product Errata RHBA-2022:5172 0 None None None 2022-06-28 11:51:10 UTC

Comment 4 Sandeep 2022-06-16 11:01:38 UTC
Is the status expected to be ON_QA if the bug has been dropped?
The issue still reproduces on the latest successful 4.10 nightly build (4.10.0-0.nightly-2022-06-08-150219, from https://openshift-release.apps.ci.l2s4.p1.openshiftapps.com/) as of today.

Steps followed:

1. Connected to the master node.

2. Created the directory /etc/kubernetes/static-pod-certs.

3. sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup

found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-9
found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-6
found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6
found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-7
d0c98efc6bf85d8318d8982aba3b5f18a103c4d8b6bdbafa370fd1adb4e2f8cf
etcdctl version: 3.5.3
API version: 3.5
Error: open /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-skundu-dbg-4bjsq-master-0.c.openshift-qe.internal.crt: no such file or directory
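The failure pattern above is consistent with the script selecting the certificate directory on mere existence: once a (possibly empty, user-created) /etc/kubernetes/static-pod-certs exists, it is preferred, and subsequent certificate reads fail. A minimal sketch of that pre-fix selection logic, assuming this is how the choice worked (the function name `pick_cert_dir` and the parameterised root are hypothetical, not the actual cluster-backup.sh source):

```shell
# Hypothetical sketch of the PRE-FIX directory selection, not the real script.
pick_cert_dir() {
  # $1: root directory to inspect (parameterised here so the sketch is
  # testable; the real script would use /etc/kubernetes)
  local root="$1"
  if [ -d "${root}/static-pod-certs" ]; then
    # Chosen on mere existence: an empty, manually created directory wins,
    # and later reads of certs inside it fail with "no such file or directory".
    echo "${root}/static-pod-certs"
  else
    echo "${root}/static-pod-resources/etcd-certs"
  fi
}

root="$(mktemp -d)"
mkdir -p "${root}/static-pod-certs"                  # empty, as in step 2 above
mkdir -p "${root}/static-pod-resources/etcd-certs"   # where the certs really live
pick_cert_dir "${root}"                              # prints the empty decoy dir
```

With the empty decoy directory present, the sketch returns a path that contains no certificates, matching the `Error: open ...: no such file or directory` seen in the log.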

Comment 5 Sandeep 2022-06-16 11:13:48 UTC
I see the above PR was merged yesterday. I will wait for a successful build that includes it before verifying.

Comment 6 Sandeep 2022-06-20 10:40:27 UTC
oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.nightly-2022-06-16-134011   True        False         15m     Cluster version is 4.10.0-0.nightly-2022-06-16-134011



It now works as expected.

Steps followed:

1. Connect to the master node.

2. mkdir /etc/kubernetes/static-pod-certs

3. sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup
Certificate /etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt is missing. Checking in different directory
Certificate /etc/kubernetes/static-pod-resources/etcd-certs/configmaps/etcd-serving-ca/ca-bundle.crt found!
found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-8
found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7
found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-8
found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-8
f3865da10bf44602f62dd846bdfcab8da844ef5c60ee8d3747d020fd9acf7e6e
etcdctl version: 3.5.3
API version: 3.5
{"level":"info","ts":"2022-06-20T10:31:31.061Z","caller":"snapshot/v3_snapshot.go:65","msg":"created temporary db file","path":"/home/core/assets/backup/snapshot_2022-06-20_103129.db.part"}
{"level":"info","ts":"2022-06-20T10:31:31.072Z","logger":"client","caller":"v3/maintenance.go:211","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":"2022-06-20T10:31:31.072Z","caller":"snapshot/v3_snapshot.go:73","msg":"fetching snapshot","endpoint":"https://10.0.0.5:2379"}
{"level":"info","ts":"2022-06-20T10:31:31.784Z","logger":"client","caller":"v3/maintenance.go:219","msg":"completed snapshot read; closing"}
{"level":"info","ts":"2022-06-20T10:31:31.987Z","caller":"snapshot/v3_snapshot.go:88","msg":"fetched snapshot","endpoint":"https://10.0.0.5:2379","size":"70 MB","took":"now"}
{"level":"info","ts":"2022-06-20T10:31:31.988Z","caller":"snapshot/v3_snapshot.go:97","msg":"saved","path":"/home/core/assets/backup/snapshot_2022-06-20_103129.db"}
Snapshot saved at /home/core/assets/backup/snapshot_2022-06-20_103129.db
Deprecated: Use `etcdutl snapshot status` instead.

{"hash":3184594419,"revision":30517,"totalKey":14120,"totalSize":70283264}
snapshot db and kube resources are successfully saved to /home/core/assets/backup

sh-4.4# ls /home/core/assets/backup
snapshot_2022-06-20_103129.db  static_kuberesources_2022-06-20_103129.tar.gz


rm -rf /etc/kubernetes/static-pod-certs
/usr/local/bin/cluster-backup.sh /home/core/assets/backup
Certificate /etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt is missing. Checking in different directory
Certificate /etc/kubernetes/static-pod-resources/etcd-certs/configmaps/etcd-serving-ca/ca-bundle.crt found!
found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-8
found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7
found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-8
found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-8
etcdctl is already installed
{"level":"info","ts":"2022-06-20T10:38:24.943Z","caller":"snapshot/v3_snapshot.go:65","msg":"created temporary db file","path":"/home/core/assets/backup/snapshot_2022-06-20_103824.db.part"}
{"level":"info","ts":"2022-06-20T10:38:24.952Z","logger":"client","caller":"v3/maintenance.go:211","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":"2022-06-20T10:38:24.952Z","caller":"snapshot/v3_snapshot.go:73","msg":"fetching snapshot","endpoint":"https://10.0.0.5:2379"}
{"level":"info","ts":"2022-06-20T10:38:25.631Z","logger":"client","caller":"v3/maintenance.go:219","msg":"completed snapshot read; closing"}
{"level":"info","ts":"2022-06-20T10:38:25.841Z","caller":"snapshot/v3_snapshot.go:88","msg":"fetched snapshot","endpoint":"https://10.0.0.5:2379","size":"70 MB","took":"now"}
{"level":"info","ts":"2022-06-20T10:38:25.841Z","caller":"snapshot/v3_snapshot.go:97","msg":"saved","path":"/home/core/assets/backup/snapshot_2022-06-20_103824.db"}
Snapshot saved at /home/core/assets/backup/snapshot_2022-06-20_103824.db
Deprecated: Use `etcdutl snapshot status` instead.

{"hash":3675828109,"revision":32649,"totalKey":12794,"totalSize":70283264}
snapshot db and kube resources are successfully saved to /home/core/assets/backup
sh-4.4# ls /home/core/assets/backup
snapshot_2022-06-20_103129.db  snapshot_2022-06-20_103824.db  static_kuberesources_2022-06-20_103129.tar.gz  static_kuberesources_2022-06-20_103824.tar.gz
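The "Certificate ... is missing. Checking in different directory" messages in both verification runs suggest the fix probes for a concrete certificate file rather than trusting the directory's existence, falling back to /etc/kubernetes/static-pod-resources/etcd-certs when the file is absent. A minimal sketch of that fallback, assuming this behavior (the function name `find_ca_bundle` and the parameterised root are hypothetical, not the literal cluster-etcd-operator code):

```shell
# Hypothetical sketch of the POST-FIX lookup, mirroring the logged messages.
find_ca_bundle() {
  local root="$1"
  local rel="configmaps/etcd-serving-ca/ca-bundle.crt"
  if [ -f "${root}/static-pod-certs/${rel}" ]; then
    # A real certificate exists under static-pod-certs: use it.
    echo "${root}/static-pod-certs/${rel}"
  else
    # Matches the log line seen in the verification runs above.
    echo "Certificate ${root}/static-pod-certs/${rel} is missing. Checking in different directory" >&2
    echo "${root}/static-pod-resources/etcd-certs/${rel}"
  fi
}

root="$(mktemp -d)"
mkdir -p "${root}/static-pod-certs"   # empty decoy, as created during the test
mkdir -p "${root}/static-pod-resources/etcd-certs/configmaps/etcd-serving-ca"
touch "${root}/static-pod-resources/etcd-certs/configmaps/etcd-serving-ca/ca-bundle.crt"
find_ca_bundle "${root}"              # falls back to the static-pod-resources copy
```

Keying the decision on the CA bundle file, not the directory, is what makes both verification scenarios succeed: the empty decoy directory no longer shadows the real certificates, and removing the directory entirely takes the same fallback path.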

Comment 9 errata-xmlrpc 2022-06-28 11:50:55 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.10.20 bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:5172

