Red Hat Bugzilla – Bug 1479493
OCP upgrade and scale-up automatically upgrades RHEL when it should not
Last modified: 2017-08-29 16:34:31 EDT
Description of problem:
Scenario #1: when upgrading OSD customers' clusters, it was observed that RHEL was upgraded (unexpectedly!) during the OCP upgrade. This is unacceptable from an Operations point of view: Ops tests against specific versions of the OS, and when the OS is upgraded under the covers we lose the ability to create consistent environments for customers.
Scenario #2: when scaling up the nodes in a cluster, the new nodes are installed with the latest available version of RHEL rather than the version of RHEL installed on the existing master, infra, and compute nodes.
Version-Release number of the following components:
rpm -q openshift-ansible
rpm -q ansible
Steps to Reproduce:
1. install RHEL 7.3 and OCP v3.4/3.5/3.6
2. upgrade OCP to version +1
3. scale up cluster by N nodes
Actual results: No error produced.
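For reference, a scale-up run of this sort is typically driven by an inventory that lists the additional hosts in a dedicated group; the section and host names below follow common openshift-ansible conventions of that era and are illustrative, not taken from the reporter's environment:

```ini
# Hypothetical scale-up inventory sketch (host names invented for illustration)
[OSEv3:children]
masters
nodes
new_nodes

[masters]
master1.example.com

[nodes]
node1.example.com

[new_nodes]
; Hosts added here are installed fresh, so they pick up whatever RHEL
; packages are current in the repos at install time -- hence scenario #2.
node2.example.com
```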
The role in openshift-ansible that upgrades the OS is, not surprisingly, os_update_latest:
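For context, the role's single task boils down to an unconditional update of every installed package. A minimal sketch of such a task (not the exact file from the repository, whose module and task name may differ) looks like:

```yaml
# Sketch of an os_update_latest-style task; illustrative only.
- name: Update all packages
  package:
    name: '*'
    state: latest
```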
This role is called in these 2 playbooks:
And these are included in a plethora of other playbooks, the most important being the following:
Diving farther down the rabbit hole, these playbooks are included in even more playbooks.
Expected results: No upgrade of RHEL unless explicitly requested.
Please attach logs from ansible-playbook with the -vvv flag
I meant to add this:
We require the ability to disable the os_update_latest role.
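One common Ansible pattern for this kind of opt-out is to gate the role behind a boolean inventory variable. The variable name below (os_update) is hypothetical, shown only to illustrate the shape of the request, not what PR 5075 actually implements:

```yaml
# Hypothetical guard: skip the package update unless the operator opts in.
# The variable name 'os_update' is invented for this sketch.
- role: os_update_latest
  when: os_update | default(false) | bool
```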
Can you please provide the list of playbooks that you invoked? These playbooks aren't included in any of the documented scaleup workflows.
Reviewing the ops playbooks: you're calling the os_update_latest role, which has only one task, and that task is to update all packages. This seems like a very deliberate action, and the role has been largely unchanged for over two years.
We'll work on merging https://github.com/openshift/openshift-ansible/pull/5075 to make the role more accommodating to your needs, but to me this is not a bug.