Description of problem:

Running the scaleup playbook with an inventory like the one below works without issue:

[new_masters]
shift34-ha-n1.gsslab.pnq2.redhat.com

[new_nodes]
shift34-ha-n1.gsslab.pnq2.redhat.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"

However, if we run the playbook a second time with the hosts above moved into the original sections (since they were installed on the first run) and the [new_masters]/[new_nodes] groups left empty, it overrides the master configuration with the default master configuration:

[masters]
shift34-ha-n1.gsslab.pnq2.redhat.com

[nodes]
shift34-ha-n1.gsslab.pnq2.redhat.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"

[new_masters]

[new_nodes]

Run the scaleup playbook against this inventory and the custom configuration in the master config is gone.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:
Custom master configuration is gone after running the scaleup playbook.

Expected results:
The configuration should persist unless the masters are listed under the [new_masters] section.

Additional info:
Comparing the two master configuration files, the only difference I see is that the named certificates are gone.
Configuration and scaleup playbooks both operate against the "oo_masters_to_config" and "oo_nodes_to_config" host groups. "oo_masters_to_config" and "oo_nodes_to_config" will contain different hosts depending on which groups exist in the inventory. For example, if a [new_masters] group exists in the inventory (and contains hosts) then "oo_masters_to_config" will contain only the [new_masters] hosts. The same applies to "oo_nodes_to_config".

[masters]
master[1:3].example.com

[new_masters]
master4.example.com

[nodes]
master[1:3].example.com

[new_nodes]
master4.example.com

The inventory above would result in:

oo_masters_to_config = ['master4.example.com']
oo_nodes_to_config = ['master4.example.com']

When [new_masters] or [new_nodes] is empty, "oo_masters_to_config" and "oo_nodes_to_config" will contain the existing [masters] and [nodes]:

[masters]
master[1:4].example.com

[new_masters]

[nodes]
master[1:4].example.com

[new_nodes]

The inventory above would result in:

oo_masters_to_config = ['master1.example.com', 'master2.example.com', 'master3.example.com', 'master4.example.com']
oo_nodes_to_config = ['master1.example.com', 'master2.example.com', 'master3.example.com', 'master4.example.com']

In order to have playbooks operate on completely separate and possibly empty groups (e.g. a no-op scaleup when [new_masters] is empty) we will need to somehow make the play host groups a variable parameter of the configuration playbook. We cannot create additional empty host groups via add_host, as far as I can tell, which means that we may need to operate against the groups from inventory directly.
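The selection rule described above can be sketched in Python. This is a hypothetical illustration, not the actual openshift-ansible code; the function name resolve_scaleup_group and the dict-based inventory are my own:

```python
def resolve_scaleup_group(inventory, group, new_group):
    # If the [new_*] group exists and contains hosts, scaleup operates
    # only on those hosts; otherwise it falls back to the full existing
    # group -- which is why an empty [new_masters] reconfigures every
    # existing master with default settings.
    new_hosts = inventory.get(new_group, [])
    return new_hosts if new_hosts else inventory.get(group, [])

# First inventory above: [new_masters] holds only master4
inv = {"masters": ["master1.example.com", "master2.example.com",
                   "master3.example.com"],
       "new_masters": ["master4.example.com"]}
print(resolve_scaleup_group(inv, "masters", "new_masters"))
# -> ['master4.example.com']

# Second inventory above: [new_masters] is empty, so all four existing
# masters are selected for (re)configuration
inv = {"masters": ["master%d.example.com" % i for i in range(1, 5)],
       "new_masters": []}
print(resolve_scaleup_group(inv, "masters", "new_masters"))
```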
Alternatively, we can ensure the playbooks exit when there are no hosts to add when running scaleup. Proposed fix: https://github.com/openshift/openshift-ansible/pull/4784
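A minimal sketch of that fail-fast behavior, again as illustrative Python rather than the Ansible change actually made in the PR (the helper name and error message are my own):

```python
def check_scaleup_inventory(inventory):
    # Abort the scaleup run when there is nothing to add, instead of
    # silently falling back to the existing [masters]/[nodes] groups
    # and rewriting their configuration with defaults.
    if not inventory.get("new_masters") and not inventory.get("new_nodes"):
        raise SystemExit(
            "No hosts found in [new_masters] or [new_nodes]; "
            "nothing to scale up, aborting.")

# Passes silently when there is at least one new host to add:
check_scaleup_inventory({"new_masters": ["master4.example.com"],
                         "new_nodes": []})
```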
Verified with openshift-ansible-3.6.173.0.7-2.git.0.340aa2c.el7.noarch.rpm. The installer now fails immediately if an empty new_masters/new_nodes group is detected while scaling up masters or nodes.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:2639