Blocker: container upgrade
PR Created: https://github.com/openshift/openshift-ansible/pull/6842
Fixed. openshift-ansible-3.9.0-0.24.0.git.0.735690f.el7.noarch

PLAY [Verify masters are already upgraded] *************************************

TASK [fail] ********************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/config.yml:60
skipping: [host-8-244-181.host.centralci.eng.rdu2.redhat.com] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}
META: ran handlers
META: ran handlers

PLAY [Validate configuration for rolling restart] ******************************
Sorry, the wrong steps were used above. Still getting the error on openshift-ansible-3.9.0-0.24.0.git.0.735690f.el7.noarch:

TASK [fail] ********************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/config.yml:60
fatal: [host-8-244-181.host.centralci.eng.rdu2.redhat.com]: FAILED! => {
    "changed": false,
    "msg": "Master running 3.9 must be upgraded to 3.9.0 before node upgrade can be run."
}
        to retry, use: --limit @/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_9/upgrade_nodes.retry
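For context, the check at config.yml:60 is a fail task gated on a version comparison. A minimal sketch of that pattern in Ansible YAML (the exact condition in openshift-ansible may carry extra guards; this only reconstructs the shape implied by the message above):

    - name: Verify masters are already upgraded
      fail:
        msg: "Master running {{ openshift.common.version }} must be upgraded to {{ openshift_version }} before node upgrade can be run."
      # The observed failure ("3.9" vs "3.9.0") is consistent with a plain
      # string comparison between the detected and target versions.
      when: openshift.common.version != openshift_version

A "3.9" vs "3.9.0" mismatch like the one above points at the two variables being derived differently, not at the master actually being out of date.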
Please include full verbose output of the play run. Also, how are the masters being upgraded? Is that happening during the same test? I need to see inventory, procedure steps, and output for that as well.
1. Set up OCP 3.7 on Atomic Host.
2. ansible-playbook -vvv -i ah3726.inv /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_9/upgrade_control_plane.yml | tee up39024ah_ctrl_01.log
3. ansible-playbook -vvv -i ah3726.inv /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_9/upgrade_nodes.yml | tee up39024ah_nodes.log
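The ah3726.inv inventory itself is not quoted here; for readers triaging similar failures, the upgrade-relevant variables of a containerized Atomic Host inventory typically look like the following, written in group_vars-style YAML. Every value below is an assumption for illustration, not taken from this report:

    # Hypothetical excerpt (group_vars/OSEv3.yml form); values are examples only.
    openshift_deployment_type: openshift-enterprise
    containerized: true                  # containerized install on Atomic Host
    openshift_release: "v3.9"            # target release for the upgrade
    openshift_image_tag: v3.9.0-0.24.0   # pins the container image tag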
Weihua, thank you for the attachments, they are very helpful. It seems this is an edge case when upgrading from 3.8 to 3.9 instead of 3.7 to 3.9. I have created additional logic to account for both scenarios regarding the usage of openshift_image_tag: https://github.com/openshift/openshift-ansible/pull/6896
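PR 6896 is the authoritative change; as a rough illustration of the kind of normalization involved, an image tag has to be reduced to a bare x.y.z version before it can be compared across upgrade paths. A hedged sketch only (the fact name l_target_version is hypothetical, not taken from the PR):

    # Sketch only: strip the leading "v" and any build suffix so that
    # "v3.9.0-0.24.0" and "v3.8.31" both reduce to a comparable version.
    - name: Derive a comparable version from openshift_image_tag
      set_fact:
        l_target_version: "{{ openshift_image_tag | regex_replace('^v', '') | regex_replace('-.*$', '') }}"
      when: openshift_image_tag is defined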
Still hitting the issue on the latest build, openshift-ansible-3.9.0-0.31.0.git.0.e0a0ad8.el7.noarch.
Blocker: this blocks the containerized OCP upgrade test.
The PR from comment 10 has been merged but is not in a tagged build yet; moving to MODIFIED.
Not in the latest build; waiting for a new build.
The fix has been included since openshift-ansible-3.9.0-0.32.
There is still something wrong; please check it. Thanks.

openshift-ansible-3.9.0-0.41.0.git.0.8290c01.el7.noarch, containerized installation on Atomic Host with openshift_image_tag=v3.9.0-0.41.0 in the inventory file; the other parameters are the same as before.

TASK [fail] ********************************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/pre/config.yml:60
fatal: [host-xxx.redhat.com]: FAILED! => {
    "changed": false,
    "msg": "Master running 3.9.0 must be upgraded to 3.6.173.0.96 before node upgrade can be run."
}

Failure summary:

1. Hosts:   host-xxx.redhat.com
   Play:    Verify masters are already upgraded
   Task:    fail
   Message: Master running 3.9.0 must be upgraded to 3.6.173.0.96 before node upgrade can be run.
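The 3.6.173.0.96 target suggests openshift_version is being resolved from something other than the requested image tag on this host. A hypothetical diagnostic task (not part of openshift-ansible) can be dropped into a play against the masters while triaging, to print the values the pre-upgrade check appears to compare:

    # Diagnostic sketch only; the variable names follow openshift-ansible usage.
    - name: Show versions involved in the node pre-upgrade check
      debug:
        msg: >-
          running={{ openshift.common.version }}
          target={{ openshift_version }}
          image_tag={{ openshift_image_tag | default('unset') }}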
This should be fixed by https://github.com/openshift/openshift-ansible/pull/7124, which has been merged.
Fixed in openshift-ansible-3.9.0-0.45.0.git.0.05f6826.el7.noarch; the upgrade succeeded without error. Thanks.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:0489