Description of problem:
When running the redeploy-openshift-ca.yml playbook against an ocp-3.5 env (one master with embedded etcd, one node), the playbook fails at the end when restarting atomic-openshift-master.service.

Version-Release number of selected component (if applicable):
openshift-ansible-3.5.5-1.git.0.3ae2138.el7.noarch

How reproducible:
Always

Steps to Reproduce:
1. Run 'ansible-playbook -i host playbooks/byo/openshift-cluster/redeploy-openshift-ca.yml'

Actual results:
TASK [Restart master] **********************************************************
fatal: [x.compute-1.amazonaws.com]: FAILED! => {
    "changed": false,
    "failed": true
}

MSG:

Unable to restart service atomic-openshift-master: Job for atomic-openshift-master.service failed because the control process exited with error code. See "systemctl status atomic-openshift-master.service" and "journalctl -xe" for details.

[root@ip-172-18-4-128 master]# journalctl -u atomic-openshift-master
...
Feb 09 01:32:05 ip-172-18-4-128.ec2.internal atomic-openshift-master[21406]: E0209 01:32:05.203533 21406 reflector.go:199] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/storageclass/default/admission.go:75: Failed to list *storage.StorageClass: Get https://ip-172-18-4-128.ec2.internal:8443/apis/storage.k8s.io/v1beta1/storageclasses?resourceVersion=0: dial tcp 172.18.4.128:8443: getsockopt: connection refused
Feb 09 01:32:05 ip-172-18-4-128.ec2.internal atomic-openshift-master[21406]: E0209 01:32:05.203590 21406 reflector.go:199] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:103: Failed to list *api.ServiceAccount: Get https://ip-172-18-4-128.ec2.internal:8443/api/v1/serviceaccounts?resourceVersion=0: dial tcp 172.18.4.128:8443: getsockopt: connection refused
Feb 09 01:32:05 ip-172-18-4-128.ec2.internal atomic-openshift-master[21406]: E0209 01:32:05.203638 21406 reflector.go:199] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/resourcequota/resource_access.go:83: Failed to list *api.ResourceQuota: Get https://ip-172-18-4-128.ec2.internal:8443/api/v1/resourcequotas?resourceVersion=0: dial tcp 172.18.4.128:8443: getsockopt: connection refused
Feb 09 01:32:05 ip-172-18-4-128.ec2.internal atomic-openshift-master[21406]: E0209 01:32:05.208603 21406 reflector.go:199] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:119: Failed to list *api.Secret: Get https://ip-172-18-4-128.ec2.internal:8443/api/v1/secrets?fieldSelector=type%3Dkubernetes.io%2Fservice-account-token&resourceVersion=0: dial tcp 172.18.4.128:8443: getsockopt: connection refused
Feb 09 01:32:05 ip-172-18-4-128.ec2.internal atomic-openshift-master[21406]: F0209 01:32:05.209497 21406 start_master.go:112] could not reach etcd: client: etcd cluster is unavailable or misconfigured; error #0: x509: certificate signed by unknown authority
Feb 09 01:32:05 ip-172-18-4-128.ec2.internal systemd[1]: atomic-openshift-master.service: main process exited, code=exited, status=255/n/a
Feb 09 01:32:05 ip-172-18-4-128.ec2.internal systemd[1]: Failed to start Atomic OpenShift Master.
Feb 09 01:32:05 ip-172-18-4-128.ec2.internal systemd[1]: Unit atomic-openshift-master.service entered failed state.
Feb 09 01:32:05 ip-172-18-4-128.ec2.internal systemd[1]: atomic-openshift-master.service failed.

Expected results:
The playbook completes successfully and atomic-openshift-master.service restarts cleanly after the CA redeployment.

Additional info:
The fatal entry in the journal shows the root cause: the master cannot reach etcd because etcd's serving certificate no longer validates against the CA the master's etcd client trusts ("x509: certificate signed by unknown authority").
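The decisive line is the F0209 fatal: after the CA redeploy, the CA file the master uses to validate etcd no longer matches the CA that signed etcd's serving certificate. The failure mode can be reproduced in isolation with openssl; this is an illustrative sketch (file names and CNs are made up, not taken from the affected cluster):

```shell
# Sketch of the mismatch: a server cert signed by the old CA fails
# verification against a freshly generated ("redeployed") CA.
workdir=$(mktemp -d)
cd "$workdir"

# Old CA, and an "etcd" server cert signed by it
openssl req -x509 -newkey rsa:2048 -nodes -keyout oldca.key -out oldca.crt \
        -subj "/CN=old-ca" -days 1 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout etcd.key -out etcd.csr \
        -subj "/CN=etcd" 2>/dev/null
openssl x509 -req -in etcd.csr -CA oldca.crt -CAkey oldca.key \
        -CAcreateserial -out etcd.crt -days 1 2>/dev/null

# A new CA that did NOT sign the etcd cert (stands in for the redeployed CA)
openssl req -x509 -newkey rsa:2048 -nodes -keyout newca.key -out newca.crt \
        -subj "/CN=new-ca" -days 1 2>/dev/null

openssl verify -CAfile oldca.crt etcd.crt          # prints: etcd.crt: OK
openssl verify -CAfile newca.crt etcd.crt || true  # fails: unknown/local issuer
```

This is why a CA *bundle* containing both the old and the new CA lets old certificates keep validating during the transition.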
Proposed fix: https://github.com/openshift/openshift-ansible/pull/3312
Verified this bug with openshift-ansible-3.5.6-1.git.0.5e6099d.el7.noarch.

For a new ocp-3.5 cluster installed by openshift-ansible-3.5.6-1, ca-bundle.crt is used as the etcd client CA cert; after running redeploy-openshift-ca.yml, master and etcd work well.

For an old ocp-3.5 cluster installed by openshift-ansible-3.5.5-1 or an earlier version, ca.crt was used as the etcd client CA cert; redeploy-openshift-ca.yml ran successfully against the env, the etcd client CA was changed to ca-bundle.crt, and master and etcd still work well.
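For reference, the CA the master's etcd client trusts is configured in /etc/origin/master/master-config.yaml under etcdClientInfo. A sketch of the post-fix configuration described above (file names follow the 3.x defaults; the URL host is a placeholder, not copied from the affected env):

```yaml
etcdClientInfo:
  # ca-bundle.crt (rather than ca.crt) is used as the etcd client CA after
  # the fix, so certs signed by either the old or the redeployed CA validate.
  ca: ca-bundle.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
    # embedded-etcd default port; host is illustrative
    - https://master.example.com:4001
```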
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:0903