Bug 1272197

Summary: Upgrading to 3.0.2.0 completely resets master-config.yml
Product: OpenShift Container Platform
Reporter: Wesley Hearn <whearn>
Component: Cluster Version Operator
Assignee: Jason DeTiberus <jdetiber>
Status: CLOSED NOTABUG
QA Contact: Johnny Liu <jialiu>
Severity: high
Priority: unspecified
Version: 3.0.0
CC: aos-bugs, bleanhar, jokerman, mmccomas, twiest, whearn
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Type: Bug
Last Closed: 2015-10-26 16:42:49 UTC

Description Wesley Hearn 2015-10-15 18:02:42 UTC
Description of problem:
I upgraded a cluster from 3.0.0.1 to 3.0.2.0. When I restarted openshift-master, master-config.yml had been reset to the defaults (allow-all auth; console/API URLs reset to IPs instead of DNS names).

Version-Release number of selected component (if applicable):
openshift-master-3.0.2.0-0.git.16.4d626fe.el7ose.x86_64

How reproducible:

Steps to Reproduce:
1. Install OpenShift 3.0.0.1
2. Setup master-config.yml
3. Install OpenShift 3.0.2.0

Actual results:
The upgrade resets any customizations made to master-config.yml.

Expected results:
Customizations should be preserved.

Additional info:

Comment 2 Brenton Leanhardt 2015-10-16 14:48:27 UTC
Wesley, just for clarification, did you use ansible for the upgrade or manually yum upgrade?

Comment 4 Jason DeTiberus 2015-10-26 15:28:32 UTC
Wesley,

Are you using the upgrade playbooks that Scott Dodson has been working on, or are you using openshift-ansible directly? (I'm guessing the latter.)

The issue you will hit with the latter is that we apply templates for the configuration files, so if you modify the configs outside of ansible, ansible will overwrite them on subsequent runs.

What settings are you changing in the configs that openshift-ansible does not currently support? It may be easier to solve your particular issue by adding support for the missing settings than by trying to re-apply custom hand-edited (or otherwise externally managed) settings after each ansible run.


As a note, we have a longer term item to remove the use of templates for the configs to avoid this type of situation. It'll be more for the 3.2 timeframe though.
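The clobbering behavior described above can be sketched with a minimal, illustrative ansible task (this is not the actual openshift-ansible code): a `template` task re-renders the whole destination file from its Jinja2 template and inventory variables on every run, so any edits made directly to the rendered file are discarded.

```yaml
# Illustrative sketch only -- file and template names are assumptions,
# not the real openshift-ansible task. A template task regenerates the
# destination file from the .j2 source on every run, so manual edits
# to the rendered file are lost.
- name: Lay down master config
  template:
    src: master-config.yml.j2
    dest: /etc/openshift/master/master-config.yml
  notify: restart openshift-master
```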

Comment 5 Wesley Hearn 2015-10-26 15:55:23 UTC
openshift-ansible directly.

We are changing the port from 8443 to 443, the URLs used for the API and console, and the identity provider config.
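Settings like these are normally kept across ansible runs by expressing them as inventory variables, so the templates render them rather than clobber them. As a sketch, the variable names below are the ones documented in later openshift-ansible releases; the exact names for the 3.0-era playbooks may differ, so verify against your checkout:

```ini
# Inventory sketch -- variable names are assumptions based on later
# openshift-ansible releases; verify against the version in use.
[OSEv3:vars]
openshift_master_api_port=443
openshift_master_console_port=443
openshift_master_public_api_url=https://openshift.example.com:443
openshift_master_public_console_url=https://openshift.example.com:443/console
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/openshift/master/htpasswd'}]
```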

Comment 6 Jason DeTiberus 2015-10-26 16:42:49 UTC
Closing as NOTABUG, since the issue stems from the way ansible was invoked: the ansible-managed config templates overwrote configs that had previously been edited by hand.

Wesley,

Feel free to reach out to me on exactly what the final configs look like and I can help with which variables we provide in openshift-ansible to override those configs (we can open separate bugs for configuration items that are not currently override-able).

Additional support will most likely need to be added to the AWS playbooks (in a similar manner to the openstack playbooks) to account for extra parameters provided to bin/cluster, so those configs can be passed through to bin/cluster on create. Subsequent updates/config runs *should* use the facts persisted by openshift_facts to avoid clobbering existing settings.