Description of problem:

The facts collection for {{ openshift.common.admin_binary }} appears to use the local facts of the "node" rather than the facts of the host to which execution of the command is delegated. This causes upgrades to fail in mixed RHEL & Atomic environments during execution of playbooks/common/openshift-cluster/upgrades/v3_1_to_v3_2/upgrade.yml.

The failing task:

    - name: Mark unschedulable if host is a node
      command: >
        {{ openshift.common.admin_binary }} manage-node
        {{ openshift.common.hostname | lower }} --schedulable=false
      delegate_to: "{{ groups.oo_first_master.0 }}"
      when: inventory_hostname in groups.oo_nodes_to_config

Observed error message:

    TASK [Mark unschedulable if host is a node] ************************************
    fatal: [node01.navy.eu-west-1.aws.openpaas.axa-cloud.com -> master01.navy.eu-west-1.aws.openpaas.axa-cloud.com]: FAILED! => {"changed": false, "cmd": "oadm manage-node ip-10-191-2-209.eu-west-1.compute.internal --schedulable=false", "failed": true, "msg": "[Errno 2] No such file or directory", "rc": 2}

Version-Release number of selected component (if applicable):
3.2.1

How reproducible:
Cluster dependent; requires a mixed RHEL & Atomic environment.

Steps to Reproduce:
1. Deploy a mixed RHEL & Atomic environment: Master (RPM) - Master (Atomic) - Master (Atomic), plus N nodes.
2. Upgrade from 3.1 to 3.2.

Fails with:

    - fail:
        msg: This playbook requires access to Docker 1.10 or later
      when: g_docker_version.avail_version | default(g_docker_version.curr_version, true) | version_compare('1.10','<')

Additional info:
The error seen on master02 and master03, which are both on Atomic, should not be happening, because we use facts gathering [0] to determine which binary [1] (and its location) should be used.

[0] https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_facts/library/openshift_facts.py#L1700
[1] https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_facts/library/openshift_facts.py#L1506-L1508
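For illustration, a minimal sketch of the kind of change that would address this, assuming the first master's facts have already been gathered: resolve admin_binary through hostvars for the delegation target instead of through the current host's local fact. This is an assumption about the shape of the fix, not necessarily the exact change merged upstream:

    # Sketch only: look up admin_binary from the delegation target's facts,
    # so a RHEL node delegating to an Atomic master (or vice versa) invokes
    # the binary that actually exists on that master.
    - name: Mark unschedulable if host is a node
      command: >
        {{ hostvars[groups.oo_first_master.0].openshift.common.admin_binary }}
        manage-node {{ openshift.common.hostname | lower }} --schedulable=false
      delegate_to: "{{ groups.oo_first_master.0 }}"
      when: inventory_hostname in groups.oo_nodes_to_config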
*** Bug 1347181 has been marked as a duplicate of this bug. ***
FYI, this issue prevents adding containerized nodes to a pre-existing RPM environment.
Proposed fix: https://github.com/openshift/openshift-ansible/pull/2845
Will this make it into 3.4? I see the PR is merged.
Yeah it's in.
Created attachment 1228867 [details]
Ansible inventory file and logs

The openshift_master_certificates role failed in the containerized environment.
Yeah, I ran into this yesterday. I think it's unique to situations where some masters are RPM and some are containerized; I think we only fixed the case of a containerized node against an RPM master.
*** Bug 1396254 has been marked as a duplicate of this bug. ***
Commit pushed to master at https://github.com/openshift/openshift-ansible

https://github.com/openshift/openshift-ansible/commit/405bd70f0f94f4a45cb4b7cfc7634a82928b6b2e
Merge pull request #3278 from abutcher/mixed-env

Bug 1364160 - facts collection for openshift.common.admin_binary does not seem to work in mixed environments
Tests pass on openshift-ansible-3.5.55.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:1140
*** Bug 1387704 has been marked as a duplicate of this bug. ***