Bug 1694724 - [TSB] Ansible playbooks are overwriting the var 'template_service_broker_selector' making the deployment fail
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Service Broker
Version: 3.11.0
Hardware: x86_64
OS: Linux
Target Milestone: ---
Target Release: 3.11.z
Assignee: Shawn Hurley
QA Contact: Zihan Tang
Depends On:
Reported: 2019-04-01 13:18 UTC by Andre Costa
Modified: 2019-06-26 09:08 UTC (History)
2 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Last Closed: 2019-06-26 09:07:55 UTC
Target Upstream Version:


System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:1605 0 None None None 2019-06-26 09:08:04 UTC

Description Andre Costa 2019-04-01 13:18:56 UTC
Description of problem:
The ansible-playbook run duplicates the nodeSelector on the template service broker daemonset, making the pods fail to start. The installer adds the label 'node-role.kubernetes.io/master=true' to the apiserver daemonset, so when the variable is used to set a nodeSelector, e.g. template_service_broker_selector={"node-role.kubernetes.io/infra": "true"}, the daemonset ends up with both labels in its nodeSelector and the pods cannot be scheduled.
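For illustration, the resulting daemonset ends up with a nodeSelector like the following (a hypothetical fragment, not captured from an actual cluster). Because a nodeSelector requires every listed label to match, no node satisfies it unless it carries both roles:

```yaml
# Hypothetical resulting daemonset fragment: both the installer's hardcoded
# master label and the inventory-supplied infra label land in nodeSelector,
# so scheduling requires a node labeled with BOTH roles.
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: "true"   # added by the installer
        node-role.kubernetes.io/infra: "true"    # from template_service_broker_selector
```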

Version-Release number of the following components:

# ansible --version
ansible 2.6.15
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /bin/ansible
  python version = 2.7.5 (default, Sep 12 2018, 05:31:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]

How reproducible:
Every time

Steps to Reproduce:
1. Set var template_service_broker_selector={"node-role.kubernetes.io/infra": "true"} on inventory
2. Run the openshift-service-catalog/config.yml playbook
3. Check the daemonset on the project:
  # oc get all -n openshift-template-service-broker

Actual results:
The daemonset's nodeSelector will contain two labels, the pods won't start, and the playbook will fail on TASK [template_service_broker : Verify that TSB is running]

Expected results:
Only the value set in the 'template_service_broker_selector' variable should be used

Additional info:
The issue appears to be the hardcoded NODE_SELECTOR parameter in the deploy task:

 # grep -e 'Apply template file' -A7 /usr/share/ansible/openshift-ansible/roles/template_service_broker/tasks/deploy.yml
- name: Apply template file
  shell: >
    {{ openshift_client_binary }} process --config={{ mktemp.stdout }}/admin.kubeconfig
    -f "{{ mktemp.stdout }}/{{ __tsb_template_file }}" -n openshift-template-service-broker
    --param API_SERVER_CONFIG="{{ config['content'] | b64decode }}"
    --param IMAGE="{{ template_service_broker_image }}"
    --param NODE_SELECTOR={{ {'node-role.kubernetes.io/master':'true'} | to_json | quote }}
    | {{ openshift_client_binary }} apply --config={{ mktemp.stdout }}/admin.kubeconfig -f -
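A possible fix (a sketch only, not the merged patch) would be to pass the inventory variable through to NODE_SELECTOR instead of the hardcoded master selector, falling back to the current default when the variable is unset:

```yaml
# Sketch: honor template_service_broker_selector when defined, falling back
# to the previous hardcoded master selector otherwise.
- name: Apply template file
  shell: >
    {{ openshift_client_binary }} process --config={{ mktemp.stdout }}/admin.kubeconfig
    -f "{{ mktemp.stdout }}/{{ __tsb_template_file }}" -n openshift-template-service-broker
    --param API_SERVER_CONFIG="{{ config['content'] | b64decode }}"
    --param IMAGE="{{ template_service_broker_image }}"
    --param NODE_SELECTOR={{ template_service_broker_selector
      | default({'node-role.kubernetes.io/master': 'true'})
      | to_json | quote }}
    | {{ openshift_client_binary }} apply --config={{ mktemp.stdout }}/admin.kubeconfig -f -
```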

Comment 1 Shawn Hurley 2019-04-23 14:03:42 UTC
I have been unable to reproduce this bug locally. QA can you validate that you can reproduce?

Comment 2 Zihan Tang 2019-04-28 06:21:15 UTC
As far as I know, template-service-broker runs on the `master` node by default.
I tried adding template_service_broker_selector={"node-role.kubernetes.io/compute": "true"} to the inventory; the playbook passed, but the parameter had no effect: it still runs on the master.

# oc get all -n openshift-template-service-broker
NAME                  READY     STATUS    RESTARTS   AGE
pod/apiserver-h9ghc   1/1       Running   0          2h

NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/apiserver   ClusterIP   <none>        443/TCP   2h

NAME                       DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR                         AGE
daemonset.apps/apiserver   1         1         1         1            1           node-role.kubernetes.io/master=true   2h

If you want it to run on a compute node, you can edit the daemonset manually and add the selector; that works:
        node-role.kubernetes.io/compute: "true"
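Equivalently (a hypothetical command, adjust the label value to your cluster), the manual edit above can be done non-interactively with a JSON patch, which replaces the whole nodeSelector map rather than merging into it:

```shell
# Sketch: replace the daemonset's nodeSelector in one step. A JSON patch
# "replace" op is used so the stale master label does not survive a merge.
oc patch daemonset/apiserver -n openshift-template-service-broker \
  --type=json \
  -p '[{"op":"replace","path":"/spec/template/spec/nodeSelector","value":{"node-role.kubernetes.io/compute":"true"}}]'
```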

openshift v3.11.98

Comment 4 errata-xmlrpc 2019-06-26 09:07:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

