Bug 1449911 - [3.6] Master configuration not persistent after running scaleup playbook
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 3.6.z
Assignee: Andrew Butcher
QA Contact: Gan Huang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-05-11 07:02 UTC by Jaspreet Kaur
Modified: 2017-09-05 17:42 UTC (History)
5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
If the master scaleup playbook was run without any hosts in the new_masters group, the playbooks could reconfigure certain master configuration variables. The playbooks have been updated to fail immediately if there are no hosts in the new_masters group.
Clone Of:
Environment:
Last Closed: 2017-09-05 17:42:58 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID: Red Hat Product Errata RHBA-2017:2639
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: OpenShift Container Platform atomic-openshift-utils bug fix and enhancement
Last Updated: 2017-09-05 21:42:36 UTC

Description Jaspreet Kaur 2017-05-11 07:02:56 UTC
Description of problem: Running the scaleup playbook with host entries such as the following works without issue:

[new_masters]
shift34-ha-n1.gsslab.pnq2.redhat.com

[new_nodes]
shift34-ha-n1.gsslab.pnq2.redhat.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"

However, consider running the playbook a second time, after the hosts above have been moved into the original [masters] and [nodes] sections (since they were installed on the first run) and the [new_masters] and [new_nodes] sections have been left empty. In that case the playbook overwrites the master configuration with the default master configuration:


[masters]
shift34-ha-n1.gsslab.pnq2.redhat.com

[nodes]
shift34-ha-n1.gsslab.pnq2.redhat.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"

[new_masters]

[new_nodes]

Running the scaleup playbook with this inventory removes the custom configuration from the master config.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results: Custom master configuration is gone after running scaleup playbook


Expected results: The configuration should persist unless the masters are listed under the [new_masters] section


Additional info:

Comment 1 Scott Dodson 2017-05-11 20:56:56 UTC
Comparing the two master configuration files, the only difference I see is that the named certificates are gone.

Comment 3 Andrew Butcher 2017-07-18 17:27:47 UTC
Configuration and scaleup playbooks both operate against the "oo_masters_to_config" and "oo_nodes_to_config" host groups. "oo_masters_to_config" and "oo_nodes_to_config" will contain different hosts depending on which groups exist in the inventory.

For example, if a [new_masters] group exists in the inventory (and contains hosts), then "oo_masters_to_config" will contain only the hosts in [new_masters]. The same applies to "oo_nodes_to_config":

[masters]
master[1:3].example.com

[new_masters]
master4.example.com

[nodes]
master[1:3].example.com

[new_nodes]
master4.example.com

The inventory above would result in:

oo_masters_to_config = ['master4.example.com']
oo_nodes_to_config = ['master4.example.com']



When [new_masters] or [new_nodes] is empty, "oo_masters_to_config" and "oo_nodes_to_config" will contain the existing [masters] and [nodes] hosts.

[masters]
master[1:4].example.com

[new_masters]

[nodes]
master[1:4].example.com

[new_nodes]

The inventory above would result in:

oo_masters_to_config = ['master1.example.com', 'master2.example.com', 'master3.example.com', 'master4.example.com']
oo_nodes_to_config = ['master1.example.com', 'master2.example.com', 'master3.example.com', 'master4.example.com']


In order to have playbooks operate on completely separate and possibly empty groups (e.g. a noop scaleup when [new_masters] is empty), we will need to somehow make the play host groups a variable parameter of the configuration playbook. As far as I can tell, we cannot create additional empty host groups via add_host, which means we may need to operate against the inventory groups directly.
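The group-selection behavior described above can be sketched as a simplified Python model (the function name and plain-list representation are illustrative; the real logic lives in the openshift-ansible group evaluation plays):

```python
def resolve_config_group(existing_hosts, new_hosts):
    """Model of how oo_masters_to_config / oo_nodes_to_config are chosen.

    If the new_* group contains hosts, only those hosts are configured;
    otherwise the playbooks fall back to the full existing group -- which
    is what clobbered the custom master configuration in this bug.
    """
    return list(new_hosts) if new_hosts else list(existing_hosts)


# First scaleup run, [new_masters] populated: only the new host is touched.
masters = ["master1.example.com", "master2.example.com", "master3.example.com"]
print(resolve_config_group(masters, ["master4.example.com"]))
# -> ['master4.example.com']

# Second run, [new_masters] empty: every existing master gets reconfigured.
masters.append("master4.example.com")
print(resolve_config_group(masters, []))
# -> all four masters, so their configs are rewritten with defaults
```

This fallback is intentional for full-cluster config runs; the bug is that the scaleup entry point reuses it without guarding against an empty new_* group.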

Comment 4 Andrew Butcher 2017-07-18 20:23:47 UTC
Alternatively, we can ensure the playbooks exit when there are no hosts to add when running scaleup.

Proposed fix: https://github.com/openshift/openshift-ansible/pull/4784
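The fail-fast idea can be sketched in Python (a simplified stand-in for the Ansible-level check in the proposed PR; the function name and message text are illustrative, not the actual implementation):

```python
def ensure_scaleup_hosts(new_group_name, new_hosts):
    """Abort a scaleup run when the new_* inventory group is empty.

    Instead of silently falling back to the existing [masters]/[nodes]
    groups (and rewriting their configuration with defaults), the
    playbook should fail immediately with a clear message.
    """
    if not new_hosts:
        raise SystemExit(
            "scaleup requires at least one host in the "
            "[{}] group".format(new_group_name)
        )
    return new_hosts


ensure_scaleup_hosts("new_masters", ["master4.example.com"])  # passes
# ensure_scaleup_hosts("new_masters", [])  # would abort the run
```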

Comment 6 Gan Huang 2017-08-23 08:38:45 UTC
Verified with openshift-ansible-3.6.173.0.7-2.git.0.340aa2c.el7.noarch.rpm

The installer now fails immediately if an empty new_masters/new_nodes group is detected while scaling up masters or nodes.

Comment 8 errata-xmlrpc 2017-09-05 17:42:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2639

