Description of problem:
This bug is different from https://bugzilla.redhat.com/show_bug.cgi?id=1574899; this bug tracks the master/node version mismatch issue.

Version-Release number of the following components:
openshift-ansible-3.10.0-0.32.0.git.0.bb50d68.el7.noarch

How reproducible:
Always

Steps to Reproduce:
1. Configure a yum repo pointing to an old version of atomic-openshift, such as 3.10.0, while the latest v3.10 images are built with atomic-openshift 3.10.1
2. Set the following option in the inventory file without setting openshift_image_tag
3. Trigger the installation

Actual results:
The node kubelet binary is installed from the 3.10.0 rpm package, while the master pod runs the ose-control-plane:v3.10 image, which is built with atomic-openshift 3.10.1. That means the node and master services are running different versions.

Expected results:
The openshift version for master and node should be the same.

Additional info:
The installer should have a version check like https://github.com/openshift/openshift-ansible/pull/7699/files#diff-43777e49394f32a80ac73f42c351be9bL33, which has already been removed.
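A minimal inventory fragment that reproduces the mismatch might look like the following. This is a hedged sketch: the exact option referenced in step 2 is not shown in this report, and the variable values are illustrative.

```ini
# Hypothetical repro inventory fragment (values illustrative).
[OSEv3:vars]
openshift_deployment_type=openshift-enterprise
openshift_release="3.10"
# openshift_image_tag is deliberately NOT set, so the installer defaults to
# the moving v3.10 image tag, while the node rpms come from whatever
# atomic-openshift version the configured yum repo provides (3.10.0 here).
```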
This will not be the case for most production users. Our public registries and repos will typically be in sync. I think it's unlikely we'll re-add this code given the direction we're moving.
(In reply to Michael Gugino from comment #1)
> This will not be the case for most production users. Our public registries
> and repos will typically be in sync.

I can think of several other scenarios where this can happen.

Scenario 1: run a 3.10 installation with openshift_pkg_version=-3.10.0 while the latest version is 3.10.1. After installation, the master pod uses the v3.10 image, which is actually the v3.10.1 image.

Scenario 2: when 3.10.0 is released, a user runs a fresh install: the node runs the 3.10.0 binary and the master pod runs the v3.10 image, which at that point is the v3.10.0 image. Some days later v3.10.1 is released; when a master pod is recreated, it pulls the v3.10.1 image.

> I think it's unlikely we'll re-add this code given the direction we're
> moving.

I am not asking to re-add this code, but some of the original code really did prevent such situations to some extent, such as the "Fail if rpm version and docker image version are different" task. The core question of this bug is whether it is proper for the installer to use 'v3.10' as the default openshift_image_tag. As a QE, I just want to raise this issue early, to avoid customers complaining that it did not happen on older versions of OCP but does happen on newer ones.
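The removed "Fail if rpm version and docker image version are different" check can be sketched roughly as follows. This is an illustrative reimplementation, not the actual openshift-ansible code; the function names and the loose-match rule for short tags like v3.10 are assumptions.

```python
# Hypothetical sketch of a rpm-version vs. image-tag consistency check,
# in the spirit of the removed installer task. Not the real installer code.

def normalize(version: str) -> str:
    """Strip a leading 'v' and any rpm release suffix (e.g. '-1.el7')."""
    return version.lstrip("v").split("-")[0]

def versions_match(rpm_version: str, image_tag: str) -> bool:
    """Compare only as many components as the shorter version specifies,
    so a channel-style tag like 'v3.10' matches rpm '3.10.0', but a fully
    qualified 'v3.10.1' does not match rpm '3.10.0'."""
    a = normalize(rpm_version).split(".")
    b = normalize(image_tag).split(".")
    n = min(len(a), len(b))
    return a[:n] == b[:n]

# The mismatch described in this bug: node rpm 3.10.0, master image v3.10.1.
print(versions_match("3.10.0", "v3.10.1"))  # False -> installer should fail
print(versions_match("3.10.1", "v3.10.1"))  # True  -> versions are in sync
```

Note that a check like this only helps if it compares against the version baked into the image, not the tag: the whole problem here is that the v3.10 tag silently moves from 3.10.0 to 3.10.1 content.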
I think this is best covered with documentation. Best practice is to set:

openshift_image_tag: v3.10.0
openshift_pkg_version: -3.10.0
openshift_release: "3.10"

Otherwise, you are indicating you want 'whatever the latest released 3.10 bits are'.
If so, it would be better to turn this bug into a doc bug.
Need to document that openshift_pkg_version in 3.10 only affects the node packages and that openshift_image_tag is used for all other components that run as pods.
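To document that split concretely, an inventory sketch along these lines could be used (values illustrative, comments summarize the behavior described above):

```ini
[OSEv3:vars]
# In 3.10 this affects only the node rpm packages (kubelet, etc.):
openshift_pkg_version=-3.10.0
# This is used for all other components, which run as pods from images:
openshift_image_tag=v3.10.0
openshift_release="3.10"
# Pinning both to the same z-stream version avoids master/node skew.
```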
OCP 3.6-3.10 is no longer on full support [1]. Marking CLOSED DEFERRED. If you have a customer case with a support exception or have reproduced on 3.11+, please reopen and include those details. When reopening, please set the Target Release to the appropriate version where needed. [1]: https://access.redhat.com/support/policy/updates/openshift