Bug 1541247 - service catalog still using ocp3.7 images after upgrade to ocp3.9
Summary: service catalog still using ocp3.7 images after upgrade to ocp3.9
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Service Broker
Version: 3.9.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.9.0
Assignee: Jeff Peeler
QA Contact: Jian Zhang
URL:
Whiteboard:
Keywords:
Depends On: 1547803
Blocks:
 
Reported: 2018-02-02 05:32 UTC by Zhang Cheng
Modified: 2018-03-28 14:26 UTC (History)
CC: 5 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-03-28 14:25:43 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2018:0489 None None None 2018-03-28 14:26 UTC

Comment 1 Zhang Cheng 2018-02-02 05:34:52 UTC
I have bug 1540840 to track the ASB side; this bug is just to track the service catalog side. Thanks.

Comment 2 Jeff Peeler 2018-02-13 14:05:46 UTC
Merged 2/12: https://github.com/openshift/openshift-ansible/pull/7095

Comment 4 Jian Zhang 2018-02-22 09:35:29 UTC
I used version v3.9.0-0.47.0.0 to verify this issue but got errors during the upgrade.

Jenkins job: https://openshift-qe-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/AtomicOpenshiftUpdate/336/console

TASK [openshift_service_catalog : wait for api server to be ready] *************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_service_catalog/tasks/start_api_server.yml:11
FAILED - RETRYING: wait for api server to be ready (60 retries left).
FAILED - RETRYING: wait for api server to be ready (59 retries left).
FAILED - RETRYING: wait for api server to be ready (58 retries left).
FAILED - RETRYING: wait for api server to be ready (57 retries left).
FAILED - RETRYING: wait for api server to be ready (56 retries left).
FAILED - RETRYING: wait for api server to be ready (55 retries left).
FAILED - RETRYING: wait for api server to be ready (54 retries left).
FAILED - RETRYING: wait for api server to be ready (53 retries left).
FAILED - RETRYING: wait for api server to be ready (52 retries left).
fatal: [host-8-245-189.host.centralci.eng.rdu2.redhat.com]: FAILED! => {"attempts": 10, "changed": false, "connection": "close", "content": "[+]ping ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-service-catalog-apiserver-informers ok\n[-]etcd failed: reason withheld\nhealthz check failed\n", "content_length": "180", "content_type": "text/plain; charset=utf-8", "date": "Thu, 22 Feb 2018 08:12:24 GMT", "msg": "Status code was not [200]: HTTP Error 500: Internal Server Error", "redirected": false, "status": 500, "url": "https://apiserver.kube-service-catalog.svc/healthz", "x_content_type_options": "nosniff"}

There is already a bug tracking this: https://bugzilla.redhat.com/show_bug.cgi?id=1547803. I will verify this issue once bug 1547803 is fixed.
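For anyone hitting the same failure, a minimal sketch of how to reproduce the readiness check the Ansible task performs and dig into the "[-]etcd failed" line. The `app=apiserver` label selector is an assumption about how the catalog apiserver pods are labeled; adjust it for your cluster.

```shell
# Reproduce the health probe the upgrade task retries (run on a master).
# The output above shows "[-]etcd failed: reason withheld", so the
# apiserver itself is up but its etcd backend is not healthy.
curl -k https://apiserver.kube-service-catalog.svc/healthz

# Inspect the catalog apiserver logs for the underlying etcd error.
# NOTE: the app=apiserver label is an assumption; verify with
# `oc get pods -n kube-service-catalog --show-labels` first.
oc logs -n kube-service-catalog -l app=apiserver
```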

Comment 5 Zhang Cheng 2018-03-01 08:12:44 UTC
Changing status to "MODIFIED" since this is blocked by bug 1547803 and not ready for testing at present.

Comment 7 Jian Zhang 2018-03-07 08:06:01 UTC
Jeff,

The openshift-ansible version:
openshift-ansible-3.9.3-1.git.0.e166207.el7.noarch

The tag of the service catalog image is "v3.9.3" after upgrading to 3.9. LGTM.

[root@qe-wmengrpm37sc2-master-etcd-1 ~]# oc get daemonset controller-manager -o yaml -n kube-service-catalog | grep image
        image: registry.reg-aws.openshift.com:443/openshift3/ose-service-catalog:v3.9.3
        imagePullPolicy: IfNotPresent

Comment 8 Jeff Peeler 2018-03-07 14:24:33 UTC
Technically you should be checking the pods to ensure they are running the correct version, because it's possible for the daemonset to be updated while the pods remain on the old image. In this case, however, I know the daemonset has a RollingUpdate strategy (instead of OnDelete), so it's ok.
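A sketch of the two-step check Jeff describes, assuming cluster-admin access: first confirm the daemonset's update strategy, then verify the running pods (not just the daemonset spec) reference the new image.

```shell
# Check the update strategy: RollingUpdate replaces pods automatically,
# while OnDelete leaves old pods on the old image until they are deleted.
oc get daemonset controller-manager -n kube-service-catalog \
  -o jsonpath='{.spec.updateStrategy.type}'

# Verify the images the running pods actually use, one pod per line:
oc get pods -n kube-service-catalog \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
```

With an OnDelete strategy, the daemonset spec can show the v3.9 image while the pods still run v3.7 until they are manually deleted and recreated.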

Comment 9 Weihua Meng 2018-03-08 04:13:20 UTC
Thanks for pointing it out, Jeff.

Checked again; the expected image is used for the pods after the upgrade.

[root@qe-wmengrpm37-master-etcd-1 ~]# oc get pods
NAME                       READY     STATUS    RESTARTS   AGE
apiserver-d8nmm            1/1       Running   0          1h
controller-manager-qpl8x   1/1       Running   0          1h
[root@qe-wmengrpm37-master-etcd-1 ~]# oc get pods -o yaml | grep image:
      image: registry.reg-aws.openshift.com:443/openshift3/ose-service-catalog:v3.9.3
      image: registry.reg-aws.openshift.com:443/openshift3/ose-service-catalog:v3.9.3
      image: registry.reg-aws.openshift.com:443/openshift3/ose-service-catalog:v3.9.3
      image: registry.reg-aws.openshift.com:443/openshift3/ose-service-catalog:v3.9.3

Comment 10 Jian Zhang 2018-03-08 05:32:33 UTC
@Jeff @Weihua, Thanks all for your information!

Comment 13 errata-xmlrpc 2018-03-28 14:25:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0489

