Bug 1575931
| Summary: | mismatched version of atomic-openshift between master and node | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Johnny Liu <jialiu> |
| Component: | Documentation | Assignee: | Alex Dellapenta <adellape> |
| Status: | CLOSED DEFERRED | QA Contact: | Vikram Goyal <vigoyal> |
| Severity: | medium | Docs Contact: | Vikram Goyal <vigoyal> |
| Priority: | medium | | |
| Version: | 3.10.0 | CC: | aos-bugs, jokerman, lxia, mmccomas, wmeng |
| Target Milestone: | --- | | |
| Target Release: | 3.10.z | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-11-20 18:52:12 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Johnny Liu
2018-05-08 10:30:57 UTC
This will not be the case for most production users. Our public registries and repos will typically be in sync. I think it's unlikely we'll re-add this code given the direction we're moving.

(In reply to Michael Gugino from comment #1)
> This will not be the case for most production users. Our public registries
> and repos will typically be in sync.

I can think of several other scenarios in which this can happen.

Scenario 1: run a 3.10 installation with openshift_pkg_version=-3.10.0 while the latest version is 3.10.1. After installation, the master pod runs the v3.10 image, which is actually the v3.10.1 image.

Scenario 2: when 3.10.0 is released, a user runs a fresh install; the node runs the 3.10.0 binary and the master pod runs the v3.10 image, which at that point is the v3.10.0 image. Some days later, v3.10.1 is released; the next time the master pod is created, it pulls the v3.10.1 image to run.

> I think it's unlikely we'll re-add this code given the direction we're
> moving.

I am not requesting that this code be added back, but some of the original code really could prevent such situations to some extent, for example the "Fail if rpm version and docker image version are different" task. The core question of this bug is whether it is proper for the installer to set 'v3.10' as the default openshift_image_tag. As QE, I just want to raise this issue early, to avoid customers complaining about why this issue did not happen in older versions of OCP but does happen in newer ones.

I think this is best covered with documentation. Best practice is to set:

openshift_image_tag: v3.10.0
openshift_pkg_version: -3.10.0
openshift_release: "3.10"

Otherwise, you are indicating that you want 'whatever the latest released 3.10 bits are'. If that is the intent, it is better to turn this bug into a doc bug. We need to document that openshift_pkg_version in 3.10 only affects the node packages, and that openshift_image_tag is used for all other components that run as pods.

OCP 3.6-3.10 is no longer on full support [1].
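For context, the three variables discussed above are normally set together in the `[OSEv3:vars]` section of an openshift-ansible inventory. A minimal sketch of the pinned-version best practice might look like the following (the surrounding group layout is illustrative, not taken from this bug report):

```ini
# Hypothetical inventory fragment pinning all components to 3.10.0,
# so node RPMs and control-plane pod images cannot drift apart.
[OSEv3:vars]
# Image tag for components that run as pods (masters, etc.)
openshift_image_tag=v3.10.0
# RPM version for node packages; note the leading hyphen
openshift_pkg_version=-3.10.0
# Release the installer validates the above against
openshift_release="3.10"
```

Leaving openshift_image_tag at its default (e.g. plain "v3.10") is what allows the pod image to float to the newest 3.10.z build while the node RPMs stay at the version installed.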
Marking CLOSED DEFERRED. If you have a customer case with a support exception, or have reproduced this on 3.11+, please reopen and include those details. When reopening, please set the Target Release to the appropriate version where needed.

[1]: https://access.redhat.com/support/policy/updates/openshift