Bug 1611818 - Registry Console Not Upgraded to Current Version During Upgrade
Summary: Registry Console Not Upgraded to Current Version During Upgrade
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cluster Version Operator
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 3.11.0
Assignee: Russell Teague
QA Contact: Johnny Liu
URL:
Whiteboard:
Depends On:
Blocks: 1619405 1619408
 
Reported: 2018-08-02 19:34 UTC by Jack Ottofaro
Modified: 2021-09-09 15:16 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
The image definition for registry-console used a variable that specified only a major.minor version. Updated to use openshift_image_tag, which specifies the full major.minor.patch version.
Clone Of:
Clones: 1619405 1619408
Environment:
Last Closed: 2018-10-11 07:23:07 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System                   ID              Private  Priority  Status  Summary  Last Updated
Red Hat Product Errata   RHBA-2018:2652  0        None      None    None     2018-10-11 07:23:41 UTC

Description Jack Ottofaro 2018-08-02 19:34:04 UTC
Description of problem:

During a 3.9 patch-level upgrade, e.g. 3.9.31 to 3.9.33, the registry console gets its image tag from the variable "openshift_upgrade_target", which is set to 3.9, rather than from "openshift_image_tag" (the Router and Registry use the latter):

https://github.com/openshift/openshift-ansible/blob/release-3.9/roles/openshift_hosted/tasks/upgrade_registry.yml#L39

- name: Update registry-console image to current version
  oc_edit:
    kind: dc
    name: registry-console
    namespace: default
    content:
      spec.template.spec.containers[0].image: "{{ l_osh_registry_console_image }}"
  vars:
    l_osh_registry_console_image: "{{ openshift_hosted_registry_registryurl | regex_replace('(origin|ose)-\\${component}', 'registry-console') |
                                      replace('${version}', 'v' ~ openshift_upgrade_target) }}"
  when:
  - openshift_deployment_type != 'origin'
  - _registry_console.results.results[0] != {}
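
The fix merged for this bug (PR #9650, see comment 3) switches the version substitution from openshift_upgrade_target to openshift_image_tag. A minimal sketch of the corrected vars block, assuming openshift_image_tag already carries the v-prefixed full version (e.g. v3.9.33):

  vars:
    l_osh_registry_console_image: "{{ openshift_hosted_registry_registryurl | regex_replace('(origin|ose)-\\${component}', 'registry-console') |
                                      replace('${version}', openshift_image_tag) }}"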

Since "imagePullPolicy" is set to "IfNotPresent" and the Registry Console tag will already be set to v3.9 no new image will be deployed.

TASK [openshift_hosted : Check for registry-console] *********************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_hosted/tasks/upgrade_registry.yml:22
ok: [tul1mdqarosm01.corporate.local] => {"changed": false, "failed": false, "results": {"cmd": "/bin/oc get dc registry-console -o json -n default", "results": [{"apiVersion": "apps.openshift.io/v1", "kind": "DeploymentConfig", "metadata": {"annotations": {"openshift.io/generated-by": "OpenShiftNewApp"}, "creationTimestamp": "2018-05-29T15:40:31Z", "generation": 1, "labels": {"app": "registry-console", "createdBy": "registry-console-template", "name": "registry-console"}, "name": "registry-console", "namespace": "default", "resourceVersion": "14578980", "selfLink": "/apis/apps.openshift.io/v1/namespaces/default/deploymentconfigs/registry-console", "uid": "9ee49c6c-6356-11e8-8ee0-0050569d3193"}, "spec": {"replicas": 1, "revisionHistoryLimit": 10, "selector": {"name": "registry-console"}, "strategy": {"activeDeadlineSeconds": 21600, "resources": {}, "rollingParams": {"intervalSeconds": 1, "maxSurge": "25%", "maxUnavailable": "25%", "timeoutSeconds": 600, "updatePeriodSeconds": 1}, "type": "Rolling"}, "template": {"metadata": {"annotations": {"openshift.io/generated-by": "OpenShiftNewApp"}, "creationTimestamp": null, "labels": {"app": "registry-console", "name": "registry-console"}}, "spec": {"containers": [{"env": [{"name": "OPENSHIFT_OAUTH_PROVIDER_URL", "value": "https://console.qa-mds-openshift.tivo.com"}, {"name": "OPENSHIFT_OAUTH_CLIENT_ID", "value": "cockpit-oauth-client"}, {"name": "KUBERNETES_INSECURE", "value": "false"}, {"name": "COCKPIT_KUBE_INSECURE", "value": "false"}, {"name": "REGISTRY_ONLY", "value": "true"}, {"name": "REGISTRY_HOST", "value": "docker-registry-default.qa-mds-apps.tivo.com"}], "image": "registry.access.redhat.com/openshift3/registry-console:v3.9", "imagePullPolicy": "IfNotPresent", "livenessProbe": {"failureThreshold": 3, "httpGet": {"path": "/ping", "port": 9090, "scheme": "HTTP"}, "initialDelaySeconds": 10, "periodSeconds": 10, "successThreshold": 1, "timeoutSeconds": 5}, "name": "registry-console", "ports": [{"containerPort": 9090, "protocol": "TCP"}], "readinessProbe": {"failureThreshold": 3, "httpGet": {"path": "/ping", "port": 9090, "scheme": "HTTP"}, "periodSeconds": 10, "successThreshold": 1, "timeoutSeconds": 5}, "resources": {}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File"}], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "terminationGracePeriodSeconds": 30}}, "test": false, "triggers": [{"type": "ConfigChange"}]}, "status": {"availableReplicas": 1, "conditions": [{"lastTransitionTime": "2018-05-29T15:41:40Z", "lastUpdateTime": "2018-05-29T15:41:46Z", "message": "replication controller \"registry-console-1\" successfully rolled out", "reason": "NewReplicationControllerAvailable", "status": "True", "type": "Progressing"}, {"lastTransitionTime": "2018-07-19T16:02:37Z", "lastUpdateTime": "2018-07-19T16:02:37Z", "message": "Deployment config has minimum availability.", "status": "True", "type": "Available"}], "details": {"causes": [{"type": "ConfigChange"}], "message": "config change"}, "latestVersion": 1, "observedGeneration": 1, "readyReplicas": 1, "replicas": 1, "unavailableReplicas": 0, "updatedReplicas": 1}}], "returncode": 0}, "state": "list"}

TASK [openshift_hosted : Update registry-console image to current version] ***********************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_hosted/tasks/upgrade_registry.yml:31
ok: [tul1mdqarosm01.corporate.local] => {"changed": false, "failed": false, "results": {"returncode": 0, "updated": false}, "state": "present"}

How reproducible:

Perform 3.9 patch level upgrade, e.g. from 3.9.31 to 3.9.33.
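
For example, with the usual in-place upgrade playbook (path shown for an RPM install of openshift-ansible; adjust the inventory and path to your environment):

ansible-playbook -i <inventory> \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_9/upgrade.yml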

Actual results:

After the upgrade, the Router and Docker Registry images are at 3.9.33, but the Registry Console image is left at whatever version it was before the upgrade. Note that it is always tagged as v3.9.

Expected results:

Although tagged as v3.9, the actual Registry Console image should be version 3.9.33.
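
As a hypothetical verification step (not part of the original report), the image ID actually running can be inspected, since the v3.9 tag alone does not identify the patch level:

oc get pod -n default -l name=registry-console \
    -o jsonpath='{.items[0].status.containerStatuses[0].imageID}'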

Comment 1 Martin Pitt 2018-08-03 05:40:21 UTC
This is apparently not a bug in the registry console itself, but in how it is deployed during the upgrade. Reassigning to the OCP team.

Comment 3 Russell Teague 2018-08-17 16:47:02 UTC
master: https://github.com/openshift/openshift-ansible/pull/9650

Comment 4 openshift-github-bot 2018-08-17 20:43:57 UTC
Commit pushed to master at https://github.com/openshift/openshift-ansible

https://github.com/openshift/openshift-ansible/commit/dc8fd36ebaf75dc154a81bfd9ad8705a83070795
Merge pull request #9650 from mtnbikenc/fix-1611818

[Bug 1611818] Use openshift_image_tag for registry-console upgrade

Comment 5 Russell Teague 2018-08-21 19:45:40 UTC
openshift-ansible-3.11.0-0.18.0
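
To check whether an installer host already carries the fix (assuming an RPM-based install):

rpm -q openshift-ansible    # fixed in openshift-ansible-3.11.0-0.18.0 and later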

Comment 8 errata-xmlrpc 2018-10-11 07:23:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2652

