Bug 1523298 - Service catalog can not be installed after upgrading from v3.6 to v3.7
Summary: Service catalog can not be installed after upgrading from v3.6 to v3.7
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Service Broker
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 3.7.z
Assignee: Jeff Peeler
QA Contact: Jian Zhang
URL:
Whiteboard:
Duplicates: 1522805
Depends On:
Blocks: 1534275
 
Reported: 2017-12-07 16:40 UTC by Alexis Solanas
Modified: 2021-12-10 15:28 UTC
CC: 31 users

Fixed In Version: openshift-ansible-3.7.24-1.git.0.18a2c6a.el7
Doc Type: Bug Fix
Doc Text:
The installer has been modified to turn on API aggregation for upgrades to 3.7, which is a required dependency for service catalog to work properly.
Clone Of:
Environment:
Last Closed: 2018-04-05 09:33:10 UTC
Target Upstream Version:
Embargoed:


Attachments
ansible-playbook-vvvv (3.17 MB, text/plain), uploaded 2017-12-07 16:40 UTC by Alexis Solanas


Links
System                             ID              Last Updated
Red Hat Bugzilla                   01988813        2019-11-28 07:02:29 UTC
Red Hat Knowledge Base (Solution)  3349011         2018-03-15 16:57:15 UTC
Red Hat Knowledge Base (Solution)  3386851         2018-03-20 12:54:21 UTC
Red Hat Product Errata             RHBA-2018:0636  2018-04-05 09:33:53 UTC

Description Alexis Solanas 2017-12-07 16:40:28 UTC
Created attachment 1364385 [details]
ansible-playbook-vvvv

Description of problem:

 Cluster upgrade from v3.6 to v3.7 worked correctly, but any attempt to install the service catalog fails: 

FAILED - RETRYING: wait for api server to be ready (1 retries left).                                      
fatal: [master1.36c.alxrh.ose]: FAILED! => {"attempts": 120, "changed": false, "cmd": ["curl", "-k", "https://apiserver.kube-service-catalog.svc/healthz"], "delta": "0:00:01.012446", "end": "2017-12-07 15:57:06.057559", "failed": true, "msg": "non-zero return code", "rc": 7, "start": "2017-12-07 15:57:05.045113", "stderr": "  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                 Dload  Upload   Total   Spent    Left  Speed\n\r  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0curl: (7) Failed connect to apiserver.kube-service-catalog.svc:443; Connection refused", "stderr_lines": ["  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current", "                                 Dload  Upload   Total   Spent    Left  Speed", "", "  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0", "  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0curl: (7) Failed connect to apiserver.kube-service-catalog.svc:443; Connection refused"], "stdout": "", "stdout_lines": []}      



The log of the apiserver pod shows that the requestheader-client-ca-file doesn't exist:

I1207 14:56:01.841961       1 round_trippers.go:445]     Cache-Control: no-store
I1207 14:56:01.841964       1 round_trippers.go:445]     Content-Type: application/json                   
I1207 14:56:01.841966       1 round_trippers.go:445]     Content-Length: 1442                             
I1207 14:56:01.841969       1 round_trippers.go:445]     Date: Thu, 07 Dec 2017 14:56:01 GMT              
I1207 14:56:01.841986       1 request.go:836] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"extension-apiserver-authentication","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication","uid":"1c5f97f6-db41-11e7-a05b-52540032d975","resourceVersion":"1357","creationTimestamp":"2017-12-07T11:23:55Z"},"data":{"client-ca-file":"-----BEGIN CERTIFICATE-----\nMIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu\nc2hpZnQtc2lnbmVyQDE1MTI2NDU3MDEwHhcNMTcxMjA3MTEyMTQwWhcNMjIxMjA2\nMTEyMTQxWjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE1MTI2NDU3MDEw\nggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDXOtymj+4fYtC8mnFkzQVW\nvdUiL43OQXqtmoSHGPnm+OrRlJe0OfRHRZc75RfA/t/Y80D7Kzxrd5fViqZ3lpE1\ndzj1DldvgTeonMui30DTn6JaFeKd9VrCnnvQ94GbumDkAInGLQas0Mlz9hmHeVA4\nawgZQH2KU4JaI+tsb/FLgqJB8F7To7HX6+vchNzbrS8rziCUabdt32Z+joCKHA10\nwnN+oRQJz4culn9amVvSRSnWSeX+V613Np2OJSbeGzXch1n/Hri3rbcynhRGlP7T\nJdew+4Tr6bEbuQw3EwG1mdi3uJOfUFqMaryMEHbocPTN5E7qYxeynSC9ScFVAH3T\nAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG\nSIb3DQEBCwUAA4IBAQC2thVnKBlRWCi2GPLg9/j5h13DbrzWABT9nLVR0HrHUbbx\nsQpONO0dM64wx5EhcHNbtiwqoJgc99M1m1snC+U+joHpFGImvUdYVxn7ErkVGMq9\nypbUob6NSJ8S1x7Nu+TK6gmu838dCrl2fIc2rH71/EY410LnbxUxYnmRfaaCAUCr\nSet8ld+cHS+UjU9w9vPbexmIGaqrNtjZHyrurvO+oNXFFofcX0gNwDJKiBxUEPmk\nnViYstJI1xDpwOI6mQVCdT7/pgQTv0YJW+vTtCjHigOeEc24kIoZuIUI9A1C/Opj\nzzB531kk7YbIFSvFFyA0zzmRK+YSQuGl2Vlvkgr/\n-----END CERTIFICATE-----\n"}}                                          
Error: cluster doesn't provide requestheader-client-ca-file 
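
A quick way to confirm the missing prerequisite on an affected master is something like the following (the apiserver pod name is a placeholder for whatever "oc get pods" returns in that namespace):

# The service catalog apiserver pod keeps failing with the requestheader error above
oc get pods -n kube-service-catalog
oc logs <apiserver-pod> -n kube-service-catalog

# On the upgraded cluster this ConfigMap only carries client-ca-file;
# the requestheader-* keys present on a fresh 3.7 install (shown further below) are missing
oc get configmap extension-apiserver-authentication -n kube-system -o yaml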




Version-Release number of the following components:

openshift-ansible-3.7.9-1.git.7.eedd332.el7.noarch
ansible-2.4.1.0-1.el7.noarch
ansible 2.4.1.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, May  3 2017, 07:55:04) [GCC 4.8.5 20150623 (Red Hat 4.8.5-14)]


How reproducible:

 Always 

Steps to Reproduce:

1. Upgrade the cluster to v3.7 
2. Launch the playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/service-catalog.yml to install the service catalog (see the command sketch below)
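
For reference, the two steps boil down to commands like these. The inventory path is a placeholder, and the v3_7 upgrade playbook path is the usual byo location in openshift-ansible 3.7; adjust both to how the upgrade was actually driven:

# 1. Upgrade the cluster from 3.6 to 3.7
ansible-playbook -i <inventory> /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_7/upgrade.yml

# 2. Install the service catalog
ansible-playbook -i <inventory> /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/service-catalog.yml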

Actual results:

TASK [openshift_service_catalog : wait for api server to be ready] ****************************************************************************************************
FAILED - RETRYING: wait for api server to be ready (120 retries left).                                    
FAILED - RETRYING: wait for api server to be ready (119 retries left).                                                                                                                                               
FAILED - RETRYING: wait for api server to be ready (118 retries left).        
 [...]
FAILED - RETRYING: wait for api server to be ready (1 retries left).
fatal: [master1.36c.alxrh.ose]: FAILED! => {"attempts": 120, "changed": false, "cmd": ["curl", "-k", "https://apiserver.kube-service-catalog.svc/healthz"], "delta": "0:00:01.012446", "end": "2017-12-07 15:57:06.057559", "failed": true, "msg": "non-zero return code", "rc": 7, "start": "2017-12-07 15:57:05.045113", "stderr": "  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                 Dload  Upload   Total   Spent    Left  Speed\n\r  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0curl: (7) Failed connect to apiserver.kube-service-catalog.svc:443; Connection refused", "stderr_lines": ["  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current", "                                 Dload  Upload   Total   Spent    Left  Speed", "", "  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0", "  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0curl: (7) Failed connect to apiserver.kube-service-catalog.svc:443; Connection refused"], "stdout": "", "stdout_lines": []}
        to retry, use: --limit @/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/service-catalog.retry


Expected results:

 Service catalog is installed without errors


Additional info:

Comparing against a freshly installed 3.7 cluster, the following files are missing under /etc/origin/master (a quick way to check is sketched after the list):

client-ca-bundle.crt  
front-proxy-ca.crt  
frontproxy-ca.crt  
front-proxy-ca.key  
frontproxy-ca.key  
frontproxy-ca.serial.txt  
openshift-aggregator.crt  
openshift-aggregator.key
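
A simple way to spot the gap is to list the aggregation-related certificates on each master and compare against a fresh 3.7 install, for example:

ls -l /etc/origin/master/ | grep -E 'front-proxy|frontproxy|aggregator|client-ca-bundle'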


Also, the extension-apiserver-authentication ConfigMap in the kube-system namespace doesn't include the certificate
for requestheader-client-ca-file: 

In the upgraded cluster it is:

apiVersion: v1
data:
  client-ca-file: |
    -----BEGIN CERTIFICATE-----
    MIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu
    c2hpZnQtc2lnbmVyQDE1MTI2NDU3MDEwHhcNMTcxMjA3MTEyMTQwWhcNMjIxMjA2
    MTEyMTQxWjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE1MTI2NDU3MDEw
    ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDXOtymj+4fYtC8mnFkzQVW
    vdUiL43OQXqtmoSHGPnm+OrRlJe0OfRHRZc75RfA/t/Y80D7Kzxrd5fViqZ3lpE1
    dzj1DldvgTeonMui30DTn6JaFeKd9VrCnnvQ94GbumDkAInGLQas0Mlz9hmHeVA4
    awgZQH2KU4JaI+tsb/FLgqJB8F7To7HX6+vchNzbrS8rziCUabdt32Z+joCKHA10
    wnN+oRQJz4culn9amVvSRSnWSeX+V613Np2OJSbeGzXch1n/Hri3rbcynhRGlP7T
    Jdew+4Tr6bEbuQw3EwG1mdi3uJOfUFqMaryMEHbocPTN5E7qYxeynSC9ScFVAH3T
    AgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG
    SIb3DQEBCwUAA4IBAQC2thVnKBlRWCi2GPLg9/j5h13DbrzWABT9nLVR0HrHUbbx
    sQpONO0dM64wx5EhcHNbtiwqoJgc99M1m1snC+U+joHpFGImvUdYVxn7ErkVGMq9
    ypbUob6NSJ8S1x7Nu+TK6gmu838dCrl2fIc2rH71/EY410LnbxUxYnmRfaaCAUCr
    Set8ld+cHS+UjU9w9vPbexmIGaqrNtjZHyrurvO+oNXFFofcX0gNwDJKiBxUEPmk
    nViYstJI1xDpwOI6mQVCdT7/pgQTv0YJW+vTtCjHigOeEc24kIoZuIUI9A1C/Opj
    zzB531kk7YbIFSvFFyA0zzmRK+YSQuGl2Vlvkgr/
    -----END CERTIFICATE-----
kind: ConfigMap
metadata:
  creationTimestamp: 2017-12-07T11:23:55Z
  name: extension-apiserver-authentication
  namespace: kube-system
  resourceVersion: "1357"
  selfLink: /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication
  uid: 1c5f97f6-db41-11e7-a05b-52540032d975



In the fresh installation: 

apiVersion: v1
data:
  client-ca-file: |
    -----BEGIN CERTIFICATE-----
    MIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu
    c2hpZnQtc2lnbmVyQDE1MTI2NDYyOTIwHhcNMTcxMjA3MTEzMTMxWhcNMjIxMjA2
    MTEzMTMyWjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE1MTI2NDYyOTIw
    ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC4LUZQjMk6Fr4EpzWGV/n9
    iTv+H2cJ1VPN6o+VML4BgQgGzmoGp69ZyAlsZ1vQMU6+omQqUhi+cdluOdCOkOO0
    GtsB+o05i+Dh91OzujhYhNoPaFsVI92l9yOx8oKLAQL4P6Yjk7fcy8cu2oq5zycd
    9qe/rmGxISLr4LtNmUTUuJpjgGjT/++9RjmVuHkKQE0ZqJdprOGzBAd/oekrA1Nl
    hOU4BlRjZczGQR9JjT/LXjEaGP8J93osK3aG6sMq66oOVqtss+/lv6MZ4kAg+xX6
    QsPH3fY2y3nFBcawfaiAwqiyTRDf3d3pQwLirizVJdiZD+a4awlhuRcld4Tk5+Dl
    AgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG
    SIb3DQEBCwUAA4IBAQC2HGClXu7euONLb4WhlW8xcdkkhhyQ7U6V57JxDmnpRBxU
    5rL5wzEsPaAs2drlhBMqbsa6/lI3GyRnALW0DMQFX1eKGQwC2mbM6kaHIaD0Jk0n
    142oIK1FwTklReemckXyraliRJwflyZTnjEjD5b9EnrB3UrGm6EQN1FbmRH+MzQL
    2AtOszQ1RoI0D//V2KYn2QYOEudpPtdvRfU8pquwER6uJI/b14fB9XXIVhWoCPWE
    ebizo63HN+seg8yBGjbT8YOCO/g41LWuSJkLzWppwkynBtCvVs8TZGRn/aSCqmys
    U1abn5TemLXZWLWjomgHwhQZ8/s0SkhwuR4fyvb0
    -----END CERTIFICATE-----
  requestheader-allowed-names: '["aggregator-front-proxy"]'
  requestheader-client-ca-file: |
    -----BEGIN CERTIFICATE-----
    MIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu
    c2hpZnQtc2lnbmVyQDE1MTI2NDY1MzcwHhcNMTcxMjA3MTEzNTM2WhcNMjIxMjA2
    MTEzNTM3WjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE1MTI2NDY1Mzcw
    ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDDgpz8OgWP61SDsuk7O2o9
    4g27jREPiS8y1OFCKemodKnkNEZ4jQ97ZYkvIrQgA8nBWXY3w2cc/5Ue25TKY81q
    ndLwyY3m0oo0irPTLXCwUly5G62jEmjvFHg5cB+Xggdd/i4y/kYDtzLwdv8JA+Cv
    1C5ARwYpFjrwMhX1qoCO3B7I4bDcawo1BF9xMj0I9jV0GgcOKsDObnW+Hse8ncxt
    0jflQIWeYPDFQRpU+MDIqcBgSyGLhR6zwHtuet0Czk0m7g6FDN2sTF7RtFigmDWk
    CyFYDpmnTZbdVQZm65YGIjvXwtQgO8H0AscOc9yXUySlCSTFNC1oxw9ANX36fsTD
    AgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG
    SIb3DQEBCwUAA4IBAQC6DnMNcOBQSRhqyAcsZCumOQdaIPb0F0V6k1jYKgWdEPBn
    noE6zgArWYEGFsGiSyOSm7R+Pr597jNkMqsoRW5kIBr7zgebvESafEEW+7naZ7IP
    VLoEvnQF9YOnEQYUum7gf01K3HIq2zo8xmx9meUN95nnxH60sLszvkFYlS89r93H
    +2t1lcdM72FJjA1unmPKIKUtQMVjm8z28ZjpucdNuQP8y/IvMPmAKiIpx3hFarVB
    /F8xmqe33WHaToqLgFrnbu/p4tItTAAZJpJKJrSiDoRXtTsENns0HcQ1k0VQcEVs
    ecdSulg8pa7oCSz2eTE98ElatseN1HONcTGTXvyv
    -----END CERTIFICATE-----
  requestheader-extra-headers-prefix: '["X-Remote-Extra-"]'
  requestheader-group-headers: '["X-Remote-Group"]'
  requestheader-username-headers: '["X-Remote-User"]'
kind: ConfigMap
metadata:
  creationTimestamp: 2017-12-07T11:34:56Z
  name: extension-apiserver-authentication
  namespace: kube-system
  resourceVersion: "11037"
  selfLink: /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication
  uid: a64c8fa3-db42-11e7-b6ae-525400fa9407
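
For comparison, a fresh 3.7 master also carries the API aggregation settings in /etc/origin/master/master-config.yaml, roughly along the lines below. This sketch follows the documented 3.7 aggregation-layer defaults rather than being copied from this cluster, so treat the file names as defaults, not necessarily what the installer wrote here:

aggregatorConfig:
  proxyClientInfo:
    certFile: aggregator-front-proxy.crt
    keyFile: aggregator-front-proxy.key
authConfig:
  requestHeader:
    clientCA: front-proxy-ca.crt
    clientCommonNames:
    - aggregator-front-proxy
    usernameHeaders:
    - X-Remote-User
    groupHeaders:
    - X-Remote-Group
    extraHeaderPrefixes:
    - X-Remote-Extra-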



Full ansible-playbook -vvvv output is attached.

Comment 1 lvdevelop 2017-12-21 09:40:49 UTC
Did you make any progress here? 

I am also trying to install the service catalog after upgrading to v3.7.

My installation fails with the same errors you are getting.

Comment 22 Jeff Peeler 2018-01-16 03:42:46 UTC
Posted upstream for review: https://github.com/openshift/openshift-ansible/pull/6736

Comment 29 Fabian von Feilitzsch 2018-01-24 15:25:51 UTC
Looking at c24, I wasn't able to see exactly what the problem was. It looks like the etcd PV bound properly; were the pods in the openshift-ansible-service-broker namespace in an error state?

This also should probably live in its own BZ, or we should move discussion to https://bugzilla.redhat.com/show_bug.cgi?id=1520291 if it's a dupe.

Comment 31 Jeff Peeler 2018-01-27 18:22:44 UTC
*** Bug 1522805 has been marked as a duplicate of this bug. ***

Comment 51 errata-xmlrpc 2018-04-05 09:33:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0636

