Bug 1516564
| Summary: | components don't end up with same versions as core | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Erik M Jacobs <ejacobs> |
| Component: | Installer | Assignee: | Russell Teague <rteague> |
| Status: | CLOSED ERRATA | QA Contact: | Johnny Liu <jialiu> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 3.7.0 | CC: | aos-bugs, jokerman, lmeyer, mmccomas, sdodson, vrutkovs |
| Target Milestone: | --- | Keywords: | Reopened |
| Target Release: | 3.11.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | All component image definitions have been updated to use a standard pattern based on provided inventory variables. This provides a consistent image source and version for each component. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-10-11 07:19:06 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description (Erik M Jacobs, 2017-11-22 23:01:21 UTC)
Components that are built from oreg_url (default: openshift3/ose-${component}:${version}) get their version from the binary. Components "hosted" on top of OCP (logging, metrics, prometheus, registry-console, service catalog, ...) get the default in their template under openshift-ansible unless you supply an override in your inventory, and every single one has a different override.

OpenShift addons are intended to move towards being less tightly coupled rather than more tightly coupled. When the router and registry move to be more aligned with how other components are installed, we'll similarly make them install whatever the latest v${major}.${minor} version is. If an admin chooses to override this with a specific version, we'll of course respect that.

The primary issue here, as with the other bug, is that the behaviors are unexpected and unpredictable. In a disconnected environment, for example, this would cause real problems. If the intention is to make these components less tightly coupled, then we need even more explicit information, either in the examples or in the documentation, about the levers, the intended behaviors, and the expected outcomes.

This looks like a duplicate of bug 1530183.

*** This bug has been marked as a duplicate of bug 1530183 ***

I'd say instead that https://bugzilla.redhat.com/show_bug.cgi?id=1530183 is a single instance of the general problem outlined here. Perhaps we should point to the plan for addressing the general problem?

I agree that this bug most thoroughly summarizes the general issue and scope. Bug 1530183 won't address the router and registry, but it should address everything else that's template based.

This should be uniform, with the exception of registry-console in 3.10. If openshift_image_tag is specified, then that is used; whenever you specify openshift_image_tag, you get image tags exactly as defined. If you only specify openshift_release, you'll end up with v3.10 or v3.11, etc.
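The two levers described above can be illustrated with a minimal inventory fragment. This is a sketch, not taken from this bug's actual inventory; the registry hostname is a placeholder, while the variable names (oreg_url, openshift_image_tag, openshift_release) are the ones the comments refer to:

```ini
# Minimal sketch of the relevant openshift-ansible inventory variables.
# registry.example.com is a placeholder; tag values are illustrative.
[OSEv3:vars]
# Image source pattern for core components (the stated default pattern).
oreg_url=registry.example.com/openshift3/ose-${component}:${version}

# Option 1: pin every component image to an exact tag.
openshift_image_tag=v3.11.0-0.10.0.0

# Option 2: set only the release; components then resolve to the
# latest v<major>.<minor> tag (e.g. v3.11).
# openshift_release=v3.11
```

Setting openshift_image_tag wins when both are given, per the comment above: specified tags are used exactly as defined, while openshift_release alone yields the floating v3.10/v3.11-style tag.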
Registry console is being addressed in https://bugzilla.redhat.com/show_bug.cgi?id=1613100.

Verified this bug with openshift-ansible-3.11.0-0.11.0.git.0.3c66516, and it PASSES.

Triggered an install without the openshift_image_tag setting; after installation, checking:

```
# oc describe po apiserver-hmjbj -n kube-service-catalog|grep Image:
    Image: registry.reg-aws.openshift.com:443/openshift3/ose-service-catalog:v3.11
# oc describe po router-1-ckdnn|grep Image:
    Image: registry.reg-aws.openshift.com:443/openshift3/ose-haproxy-router:v3.11
# oc describe po docker-registry-1-rflt2|grep Image:
    Image: registry.reg-aws.openshift.com:443/openshift3/ose-docker-registry:v3.11
```

All the components are using the v3.11 image tag.

When specifying openshift_image_tag=v3.11.0-0.10.0.0 in the inventory file and triggering an install on Atomic Host, all components use the specified image tag:

```
[root@ip-172-18-9-130 ~]# oc describe po docker-registry-1-25tlt|grep Image:
    Image: registry.reg-aws.openshift.com:443/openshift3/ose-docker-registry:v3.11.0-0.10.0.0
[root@ip-172-18-9-130 ~]# oc describe po router-1-t4xxv|grep Image:
    Image: registry.reg-aws.openshift.com:443/openshift3/ose-haproxy-router:v3.11.0-0.10.0.0
```

Following comment 11, continuing:

```
[root@ip-172-18-9-130 ~]# oc describe po webconsole-64f88cc59d-z7rrq -n openshift-web-console|grep Image:
    Image: registry.reg-aws.openshift.com:443/openshift3/ose-web-console:v3.11.0-0.10.0.0
[root@ip-172-18-9-130 ~]# oc describe po apiserver-66mwm -n kube-service-catalog|grep Image:
    Image: registry.reg-aws.openshift.com:443/openshift3/ose-service-catalog:v3.11.0-0.10.0.0
[root@ip-172-18-9-130 ~]# oc describe po asb-1-thngp -n openshift-ansible-service-broker|grep Image:
    Image: registry.reg-aws.openshift.com:443/openshift3/ose-ansible-service-broker:v3.11.0-0.10.0.0
```

So moving this bug to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2652
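The verification above amounts to checking that every component image carries the same tag. That check can be sketched as pure text processing, run here against sample image references (the registry host, pod names, and $images variable are illustrative, not from a live cluster); the same pipeline would work on output collected via `oc describe po ... |grep Image:`:

```shell
# Sample image references as they would appear in `Image:` lines.
images="registry.example.com:443/openshift3/ose-haproxy-router:v3.11
registry.example.com:443/openshift3/ose-docker-registry:v3.11
registry.example.com:443/openshift3/ose-service-catalog:v3.11"

# The tag is the text after the last ':' (awk's $NF with ':' as the
# field separator), which also skips past the registry's port number.
tags=$(printf '%s\n' "$images" | awk -F: '{print $NF}' | sort -u)
count=$(printf '%s\n' "$tags" | wc -l)

if [ "$count" -eq 1 ]; then
  echo "all components use tag: $tags"
else
  echo "mismatched tags:"
  printf '%s\n' "$tags"
fi
```

With the sample input every reference resolves to v3.11, so the single-tag branch is taken; a mixed set of tags would list each distinct tag instead.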