Description of problem:
Upgrade of OCP fails when no docker registry is deployed. A check is needed for whether the docker registry is actually deployed on the current OCP before redeploying it.

TASK [Redeploy docker registry] ************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-hosted/private/redeploy-registry-certificates.yml:89
fatal: [x.x.x.x]: FAILED! => {"changed": true, "cmd": ["oc", "rollout", "latest", "dc/docker-registry", "--config=/tmp/openshift-ansible-YjNnLv/admin.kubeconfig", "-n", "default"], "delta": "0:00:00.320098", "end": "2018-03-21 23:22:38.128920", "msg": "non-zero return code", "rc": 1, "start": "2018-03-21 23:22:37.808822", "stderr": "Error from server (NotFound): deploymentconfigs.apps.openshift.io \"docker-registry\" not found", "stderr_lines": ["Error from server (NotFound): deploymentconfigs.apps.openshift.io \"docker-registry\" not found"], "stdout": "", "stdout_lines": []}

The failing task:

- name: Redeploy docker registry
  command: >
    {{ openshift_client_binary }} rollout latest dc/docker-registry
    --config={{ mktemp.stdout }}/admin.kubeconfig -n default

Version-Release number of the following components:
openshift-ansible-3.9.13-1.git.0.47b616a.el7.noarch

How reproducible:
Always

Steps to Reproduce:
1. Install OCP without the docker registry deployed (openshift_hosted_manage_registry=false).
2. Upgrade the above OCP.

Actual results:
Upgrade failed.

Expected results:
Upgrade succeeds.

Additional info:
Please attach logs from ansible-playbook with the -vvv flag.
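One way to guard the redeploy is to put the task behind a `when` condition on the inventory variable. This is a sketch of the kind of check being requested, not the exact upstream patch; the task and variable names are taken from the playbook output quoted above:

```yaml
# Hypothetical sketch of the needed guard. The `| bool` cast matters because
# values from an INI inventory are strings, not booleans.
- name: Redeploy docker registry
  command: >
    {{ openshift_client_binary }} rollout latest dc/docker-registry
    --config={{ mktemp.stdout }}/admin.kubeconfig -n default
  when: openshift_hosted_manage_registry | default(true) | bool
```

With `openshift_hosted_manage_registry=false` in the inventory, the task would then be skipped instead of failing with the `deploymentconfigs.apps.openshift.io "docker-registry" not found` error.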
Proposed fix: https://github.com/openshift/openshift-ansible/pull/8354

This fix is applicable to 3.9 as well.
Version:
ansible-2.4.4.0-1.el7ae.noarch
openshift-ansible-3.10.0-0.50.0.git.0.bd68ade.el7.noarch

The upgrade still fails at the same task.

TASK [Redeploy docker registry] ************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-hosted/private/redeploy-registry-certificates.yml:90
Tuesday 22 May 2018 02:42:26 +0000 (0:00:00.071) 0:27:42.027 ***********
fatal: [x.x.x.x]: FAILED! => {"changed": true, "cmd": ["oc", "rollout", "latest", "dc/docker-registry", "--config=/tmp/openshift-ansible-QAL8jR/admin.kubeconfig", "-n", "default"], "delta": "0:00:00.204963", "end": "2018-05-21 22:42:30.103079", "failed": true, "msg": "non-zero return code", "rc": 1, "start": "2018-05-21 22:42:29.898116", "stderr": "Error from server (NotFound): deploymentconfigs.apps.openshift.io \"docker-registry\" not found", "stderr_lines": ["Error from server (NotFound): deploymentconfigs.apps.openshift.io \"docker-registry\" not found"], "stdout": "", "stdout_lines": []}

The task was not skipped by PR 8354. The condition needs to cast openshift_hosted_manage_registry to bool. Also, for PR 8354, it should be decided whether the condition belongs on the whole playbook (../../../openshift-hosted/private/upgrade_poll_and_check_certs.yml) or only on the docker-registry-related tasks; I noticed the playbook also contains some router-related tasks. Assigning back.
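The reason the bool cast is needed: a value such as `openshift_hosted_manage_registry=false` set in an INI inventory reaches Ansible as the string "false", and any non-empty string is truthy in a bare `when:` test, so the guard never skips the task. A minimal illustration (assumed, not the exact PR diff):

```yaml
# Without `| bool`, `when: openshift_hosted_manage_registry` evaluates the
# string "false" as true and the task still runs. With the cast, the string
# is converted to a real boolean and the task is skipped as intended.
- name: Redeploy docker registry
  debug:
    msg: "would redeploy the registry here"
  when: openshift_hosted_manage_registry | bool
```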
Created https://github.com/openshift/openshift-ansible/pull/8472
The fix is available in openshift-ansible-3.10.0-0.51.0.
Verified on openshift-ansible-3.10.0-0.53.0.git.0.53fe016.el7.noarch.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:1816