Bug 1420625

| Summary: | [intservice_public_324] Logging deployment will fail if openshift_logging_fluentd_nodeselector is specified in inventory file |
|---|---|
| Product: | OpenShift Container Platform |
| Component: | Logging |
| Version: | 3.5.0 |
| Target Release: | 3.5.z |
| Status: | CLOSED ERRATA |
| Severity: | medium |
| Priority: | unspecified |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Reporter: | Xia Zhao <xiazhao> |
| Assignee: | Jeff Cantrill <jcantril> |
| QA Contact: | Xia Zhao <xiazhao> |
| CC: | aos-bugs, bandrade, dsulliva, jcantril, jokerman, mmccomas, pdwyer |
| Doc Type: | No Doc Update |
| Type: | Bug |
| Last Closed: | 2017-10-25 13:00:48 UTC |
Description
Xia Zhao, 2017-02-09 06:20:19 UTC

This is not really a bug. A nodeSelector can be a complicated hash and should be passed like labels, as described here: https://docs.openshift.com/container-platform/latest/install_config/install/advanced_install.html#configuring-ansible

    node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"

The 'syntax' has changed from 3.4 to 3.5 because the deployer passed the value as a string that was placed in the template, whereas 3.5 uses a proper hash to allow a more complex selector.

Putting a bug onto MODIFIED is a trigger for it to go onto an errata. This bug doesn't look like it's supposed to do that. What is the next step for this bug?

Moving to ASSIGNED until we determine what to do with this bug.

Moving to 'ON_QA' for re-evaluation.

Modified the inventory file to

    openshift_logging_fluentd_nodeselector={"logging-infra-fluentd-test": "true"}

and logging deployment finished successfully (with the repro of https://bugzilla.redhat.com/show_bug.cgi?id=1419811). Set to verified, thanks.
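For reference, a minimal sketch of the two forms in an INI-style inventory, using the same label key/value as the verification above. The group name and the commented-out 3.4-style line are illustrative only, not taken from this bug:

```
[OSEv3:vars]
# 3.4-style string form -- with the 3.5 logging role this causes the
# deployment to fail, because the role now expects a hash:
# openshift_logging_fluentd_nodeselector="logging-infra-fluentd-test=true"

# 3.5 form -- passed as a dict, the same way openshift_node_labels is passed:
openshift_logging_fluentd_nodeselector={"logging-infra-fluentd-test": "true"}
```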
It's not clear what the status of this bug is.

I'm currently hitting this issue with the 3.5 Advanced Ansible install (an OpenShift on AWS reference architecture install). The vars are configured in YAML format. I have other node selectors that work fine, so I'm not sure why the fluentd node selector is not working.

For example, this works:

    openshift_registry_selector: "role=infraregistry"

This doesn't:

    openshift_logging_fluentd_nodeselector: "role=infraregistry"
[ec2-user@adpuse1 playbooks]$ cat openshift-setup.yaml
---
- include: /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
  vars:
    debug_level: 2
    openshift_debug_level: "{{ debug_level }}"
    openshift_node_debug_level: "{{ node_debug_level | default(debug_level, true) }}"
    openshift_node_kubelet_args:
      node-labels:
        - "role={{ openshift_node_labels.role }}"
    openshift_master_debug_level: "{{ master_debug_level | default(debug_level, true) }}"
    openshift_master_access_token_max_seconds: 2419200
    openshift_master_api_port: "{{ console_port }}"
    openshift_master_console_port: "{{ console_port }}"
    osm_cluster_network_cidr: "{{ ocp_cluster_pod_cidr }}"
    openshift_portal_net: "{{ ocp_cluster_service_cidr }}"
    osm_host_subnet_length: 6
    openshift_registry_selector: "role=infraregistry"
    openshift_logging_install_logging: true
    openshift_hosted_logging_deploy: true
    openshift_logging_curator_nodeselector: "role=infraregistry"
    openshift_logging_kibana_nodeselector: "role=infraregistry"
    openshift_logging_es_nodeselector: "role=infraregistry"
    openshift_logging_fluentd_nodeselector: "role=infraregistry"
    ...
    openshift_router_selector: "role=infrarouter"
    openshift_hosted_router_replicas: 3
    openshift_hosted_registry_replicas: 3
    openshift_master_cluster_method: native
    openshift_node_local_quota_per_fsgroup: 512Mi
    openshift_master_cluster_hostname: "{{ cluster_shortname }}.{{ public_hosted_zone }}"
    openshift_master_cluster_public_hostname: "{{ cluster_shortname }}.{{ public_hosted_zone }}"
    openshift_default_registry: "docker-registry.default.svc.cluster.local:5000"
    osm_default_subdomain: "{{ wildcard_zone }}"
    openshift_hostname: "{{ inventory_hostname }}"
    openshift_master_default_subdomain: "{{ osm_default_subdomain }}"
    osm_default_node_selector: "role=app"
    deployment_type: openshift-enterprise
    os_sdn_network_plugin_name: "redhat/{{ openshift_sdn }}"
    openshift_master_identity_providers:
      …
    osm_use_cockpit: true
    containerized: false
    openshift_hosted_registry_storage_kind: object
    openshift_hosted_registry_storage_provider: s3
    openshift_hosted_registry_storage_s3_accesskey: "{{ hostvars['localhost']['s3user_id'] }}"
    openshift_hosted_registry_storage_s3_secretkey: "{{ hostvars['localhost']['s3user_secret'] }}"
    openshift_hosted_registry_storage_s3_bucket: "{{ hostvars['localhost']['s3_bucket_name'] }}"
    openshift_hosted_registry_storage_s3_region: "{{ hostvars['localhost']['region'] }}"
    openshift_hosted_registry_storage_s3_chunksize: 26214400
    openshift_hosted_registry_storage_s3_rootdirectory: /registry
    openshift_hosted_registry_pullthrough: true
    openshift_hosted_registry_acceptschema2: true
    openshift_hosted_registry_enforcequota: true
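Per the reply below (which points to comment#5), the string-form selectors are the likely cause: in 3.5 the logging role expects the node selector as a dict rather than a "key=value" string. A minimal sketch, reusing the same label key/value as the playbook above, of how the failing var would look as a YAML mapping:

```yaml
# Sketch only: express the logging node selectors as YAML mappings (hashes),
# not "key=value" strings, so the 3.5 role can render them as a nodeSelector.
openshift_logging_fluentd_nodeselector:
  role: infraregistry

# The other logging selectors take the same shape, e.g.:
openshift_logging_es_nodeselector:
  role: infraregistry
```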
@Dan see comment#5, which says to specify the selector as a dict.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3049