Bug 1541247 - service catalog still using ocp3.7 images after upgrade to ocp3.9
Status: CLOSED ERRATA
Product: OpenShift Container Platform
Classification: Red Hat
Component: Service Broker
Version: 3.9.0
Hardware/OS: Unspecified / Unspecified
Priority: high  Severity: high
Target Release: 3.9.0
Assigned To: Jeff Peeler
QA Contact: Jian Zhang
Depends On: 1547803
Reported: 2018-02-02 00:32 EST by Zhang Cheng
Modified: 2018-03-28 10:26 EDT
CC: 5 users
Doc Type: No Doc Update
Last Closed: 2018-03-28 10:25:43 EDT
Type: Bug


External Tracker: Red Hat Product Errata RHBA-2018:0489 (last updated 2018-03-28 10:26 EDT)
Comment 1 Zhang Cheng 2018-02-02 00:34:52 EST
I have bug 1540840 to track the ASB side; this bug tracks only the service catalog side. Thanks.
Comment 2 Jeff Peeler 2018-02-13 09:05:46 EST
Merged 2/12: https://github.com/openshift/openshift-ansible/pull/7095
Comment 4 Jian Zhang 2018-02-22 04:35:29 EST
I used the v3.9.0-0.47.0.0 version to verify this issue but got errors during the upgrade.

Jenkins job: https://openshift-qe-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/AtomicOpenshiftUpdate/336/console

TASK [openshift_service_catalog : wait for api server to be ready] *************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_service_catalog/tasks/start_api_server.yml:11
FAILED - RETRYING: wait for api server to be ready (60 retries left).
FAILED - RETRYING: wait for api server to be ready (59 retries left).
FAILED - RETRYING: wait for api server to be ready (58 retries left).
FAILED - RETRYING: wait for api server to be ready (57 retries left).
FAILED - RETRYING: wait for api server to be ready (56 retries left).
FAILED - RETRYING: wait for api server to be ready (55 retries left).
FAILED - RETRYING: wait for api server to be ready (54 retries left).
FAILED - RETRYING: wait for api server to be ready (53 retries left).
FAILED - RETRYING: wait for api server to be ready (52 retries left).
fatal: [host-8-245-189.host.centralci.eng.rdu2.redhat.com]: FAILED! => {"attempts": 10, "changed": false, "connection": "close", "content": "[+]ping ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-service-catalog-apiserver-informers ok\n[-]etcd failed: reason withheld\nhealthz check failed\n", "content_length": "180", "content_type": "text/plain; charset=utf-8", "date": "Thu, 22 Feb 2018 08:12:24 GMT", "msg": "Status code was not [200]: HTTP Error 500: Internal Server Error", "redirected": false, "status": 500, "url": "https://apiserver.kube-service-catalog.svc/healthz", "x_content_type_options": "nosniff"}
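The `content` field of the failure above is the body of the service catalog API server's `/healthz` endpoint: each check is listed with `[+]` (passing) or `[-]` (failing), and here the `[-]etcd` line is what drove the HTTP 500. A minimal local sketch of pulling the failing checks out of such a response body (the sample text is copied from the error output above; against a live cluster you would fetch the URL instead):

```shell
# Sample /healthz body, taken verbatim from the Ansible failure above.
healthz='[+]ping ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-service-catalog-apiserver-informers ok
[-]etcd failed: reason withheld
healthz check failed'

# Lines starting with "[-]" are the failed health checks.
printf '%s\n' "$healthz" | grep '^\[-\]'
```

This prints `[-]etcd failed: reason withheld`, which is why the task gave up on the API server being ready and the follow-up was filed as bug 1547803.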

There is already a bug tracking this: https://bugzilla.redhat.com/show_bug.cgi?id=1547803. I will verify this issue once bug 1547803 is fixed.
Comment 5 Zhang Cheng 2018-03-01 03:12:44 EST
Changing status to "MODIFIED" since this is blocked by bug 1547803 and is not ready for testing at present.
Comment 7 Jian Zhang 2018-03-07 03:06:01 EST
Jeff,

The openshift-ansible version:
openshift-ansible-3.9.3-1.git.0.e166207.el7.noarch

The tag of the service catalog image is "v3.9.3" after upgrading to 3.9. LGTM.

[root@qe-wmengrpm37sc2-master-etcd-1 ~]# oc get daemonset controller-manager -o yaml -n kube-service-catalog | grep image
        image: registry.reg-aws.openshift.com:443/openshift3/ose-service-catalog:v3.9.3
        imagePullPolicy: IfNotPresent
Comment 8 Jeff Peeler 2018-03-07 09:24:33 EST
Technically you should be checking the pods to ensure they are running the correct version, because it's possible for the daemonset spec to be updated while the pods remain on the old image. In this case, however, I know that the daemonset has a rolling update strategy (instead of OnDelete), so it's ok.
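The mismatch Jeff describes is specific to the OnDelete strategy: the daemonset's `.spec` can already carry the new tag while running pods keep the old image until they are deleted. A minimal sketch of that check, comparing a daemonset image against its pods' images (the values here are hypothetical sample data, not output from this cluster; on a real cluster the two strings would come from `oc get daemonset ... -o jsonpath='{.spec.template.spec.containers[*].image}'` and `oc get pods -o jsonpath=...`):

```shell
# Hypothetical sample values: the daemonset spec was updated to v3.9.3,
# but one pod (OnDelete strategy, never deleted) still runs a 3.7 image.
ds_image="openshift3/ose-service-catalog:v3.9.3"
pod_images="openshift3/ose-service-catalog:v3.9.3
openshift3/ose-service-catalog:v3.7.23"

# Flag any pod image that does not match the daemonset spec.
for img in $pod_images; do
  if [ "$img" != "$ds_image" ]; then
    echo "stale pod image: $img"
  fi
done
```

With a rolling update strategy, as in this bug, the controller replaces the pods itself, so the daemonset-level check in comment 7 is sufficient; comment 9 confirms the pod images anyway.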
Comment 9 Weihua Meng 2018-03-07 23:13:20 EST
Thanks for pointing it out, Jeff.

Checked again; the expected image is used by the pods after the upgrade.

[root@qe-wmengrpm37-master-etcd-1 ~]# oc get pods
NAME                       READY     STATUS    RESTARTS   AGE
apiserver-d8nmm            1/1       Running   0          1h
controller-manager-qpl8x   1/1       Running   0          1h
[root@qe-wmengrpm37-master-etcd-1 ~]# oc get pods -o yaml | grep image:
      image: registry.reg-aws.openshift.com:443/openshift3/ose-service-catalog:v3.9.3
      image: registry.reg-aws.openshift.com:443/openshift3/ose-service-catalog:v3.9.3
      image: registry.reg-aws.openshift.com:443/openshift3/ose-service-catalog:v3.9.3
      image: registry.reg-aws.openshift.com:443/openshift3/ose-service-catalog:v3.9.3
Comment 10 Jian Zhang 2018-03-08 00:32:33 EST
@Jeff @Weihua, Thanks all for your information!
Comment 13 errata-xmlrpc 2018-03-28 10:25:43 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0489
