Bug 1272197 - Upgrading to 3.0.2.0 completely resets master-config.yml
Status: CLOSED NOTABUG
Product: OpenShift Container Platform
Classification: Red Hat
Component: Upgrade
Version: 3.0.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Assigned To: Jason DeTiberus
QA Contact: Johnny Liu
Reported: 2015-10-15 14:02 EDT by Wesley Hearn
Modified: 2015-10-26 12:42 EDT (History)
CC: 6 users
Doc Type: Bug Fix
Last Closed: 2015-10-26 12:42:49 EDT
Type: Bug
Attachments: None
Description Wesley Hearn 2015-10-15 14:02:42 EDT
Description of problem:
I upgraded a cluster from 3.0.0.1 to 3.0.2.0, and when I restarted openshift-master it reset master-config.yml to the defaults (allow-all auth, console/API URLs reset to IPs instead of DNS names).

Version-Release number of selected component (if applicable):
openshift-master-3.0.2.0-0.git.16.4d626fe.el7ose.x86_64

How reproducible:

Steps to Reproduce:
1. Install OpenShift 3.0.0.1
2. Setup master-config.yml
3. Install OpenShift 3.0.2.0

Actual results:
Any customizations to master-config.yml are reset to the defaults.

Expected results:
Customizations should be preserved across the upgrade.

Additional info:
Comment 2 Brenton Leanhardt 2015-10-16 10:48:27 EDT
Wesley, just for clarification, did you use ansible for the upgrade, or a manual yum upgrade?
Comment 4 Jason DeTiberus 2015-10-26 11:28:32 EDT
Wesley,

Are you using the upgrade playbooks that Scott Dodson has been working on, or are you using openshift-ansible directly? (I'm guessing the latter.)

The issue you will hit with the latter is that we apply templates for the configuration files, so if you modify the configs outside of ansible, they will be overwritten by ansible on subsequent runs.
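The overwrite behavior described here is the normal result of a templated config task. A minimal sketch of the pattern (the task name, template filename, and destination path are illustrative, not the actual openshift-ansible source):

```yaml
# Illustrative sketch only -- not the real openshift-ansible task.
# The template is re-rendered on every run, so any manual edits to
# the destination file are lost the next time ansible runs.
- name: Lay down master configuration
  template:
    src: master-config.yml.j2            # hypothetical template name
    dest: /etc/openshift/master/master-config.yml   # path may differ by version
```

Because `template` always renders from its source plus ansible variables, the only durable way to customize the file is through those variables, not by editing the rendered result.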

What settings are you changing in the configs that are not currently supported by openshift-ansible? It may be easier to solve your particular issue by adding support for the missing settings rather than trying to re-apply custom hand-edited (or otherwise externally managed) settings after running ansible.


As a note, we have a longer-term item to remove the use of templates for the configs to avoid this type of situation, but that is more for the 3.2 timeframe.
Comment 5 Wesley Hearn 2015-10-26 11:55:23 EDT
openshift-ansible directly.

We are changing several things: port 8443 to 443, the URLs used for the API and console, and the identity provider config.
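Settings like these would normally be expressed as openshift-ansible inventory variables so the templates regenerate them consistently on every run. A minimal sketch, assuming variable names along these lines (the exact names and supported options should be confirmed against the openshift-ansible version in use):

```ini
[OSEv3:vars]
# Variable names below are assumptions based on openshift-ansible
# conventions; verify them against your version's documentation.
openshift_master_api_port=443
openshift_master_console_port=443
openshift_master_public_api_url=https://openshift.example.com
openshift_master_public_console_url=https://openshift.example.com/console
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/openshift/htpasswd'}]
```

With the values in the inventory, re-running ansible reproduces the intended config instead of clobbering it.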
Comment 6 Jason DeTiberus 2015-10-26 12:42:49 EDT
Closing as NOTABUG, since the issue stems from the way ansible was invoked: the ansible run laid down templated configs, overwriting files that had previously been edited by hand.

Wesley,

Feel free to reach out to me on exactly what the final configs look like and I can help with which variables we provide in openshift-ansible to override those configs (we can open separate bugs for configuration items that are not currently override-able).

There will most likely need to be additional support added to the aws playbooks (in a similar manner to the openstack playbooks) to take into account additional parameters provided to bin/cluster on create. Subsequent updates/configs *should* use the facts persisted by openshift_facts to avoid clobbering existing settings.
