Bug 1389007 - Cannot upgrade multi-master cluster with Advanced installer if masters aren't also nodes
Keywords:
Status: CLOSED EOL
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Documentation
Version: 3.3.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Vikram Goyal
QA Contact: Vikram Goyal
Docs Contact: Vikram Goyal
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-10-26 16:21 UTC by Natale Vinto
Modified: 2019-08-10 06:45 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-08-10 06:45:36 UTC
Target Upstream Version:


Attachments
inventory file (806 bytes, text/plain)
2016-10-26 16:21 UTC, Natale Vinto

Description Natale Vinto 2016-10-26 16:21:18 UTC
Created attachment 1214347 [details]
inventory file

Description of problem:

Using the Automated In-Place Upgrade with Ansible, the upgrade of a multi-master cluster to OCP 3.3 fails at the "Restart node service" task, likely because the playbook expects each master to also be a node: it tries to restart the atomic-openshift-node and openvswitch services, as also described in the documentation:
https://docs.openshift.com/container-platform/3.3/install_config/upgrading/manual_upgrades.html#upgrading-masters

"Because masters also have node components running on them in order to be configured as part of the OpenShift SDN, restart the atomic-openshift-node and openvswitch services"


Version-Release number of selected component (if applicable):

3.2
3.3


How reproducible:


Steps to Reproduce:
1. Install OSE 3.2 on RHEL 7
2. Have an inventory where masters aren't nodes, like the one attached (a minimal sketch is shown after these steps)
3. Follow the Automated In-Place Upgrade procedure and upgrade to version 3.3 with the Ansible upgrade playbook:
4. ansible-playbook -i hosts /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade.yml
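
A minimal sketch of what such an inventory looks like (the master hostnames are taken from the error output below; the node hostnames node1/node2.example.com are placeholders; the actual file used is attached):

  [OSEv3:children]
  masters
  nodes
  etcd

  [OSEv3:vars]
  ansible_ssh_user=root
  deployment_type=openshift-enterprise

  [masters]
  openshift1.example.com
  openshift2.example.com
  openshift3.example.com

  [etcd]
  openshift1.example.com
  openshift2.example.com
  openshift3.example.com

  [nodes]
  # The masters are deliberately absent from this group.
  node1.example.com
  node2.example.com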


Actual results:

TASK [Restart node service] ****************************************************
fatal: [openshift1.example.com]: FAILED! => {"changed": false, "failed": true, "msg": "Could not find the requested service \"'atomic-openshift-node'\": "}
fatal: [openshift3.example.com]: FAILED! => {"changed": false, "failed": true, "msg": "Could not find the requested service \"'atomic-openshift-node'\": "}
fatal: [openshift2.example.com]: FAILED! => {"changed": false, "failed": true, "msg": "Could not find the requested service \"'atomic-openshift-node'\": "}
to retry, use: --limit @/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade.retry

Expected results:

failed=0


Additional info:

Attached inventory file used

Comment 1 Scott Dodson 2016-10-27 13:13:33 UTC
Per the documentation, we require that masters also be nodes for many core OCP features. Setting this to low priority.

Comment 2 Natale Vinto 2016-10-27 14:15:19 UTC
Hello, I see; it is indeed suggested in the documentation [1], but I would then suggest explicitly stating that a master *must* also be a node. Otherwise, an installation configured through a modified Ansible inventory can be conceptually wrong and not upgradable, even though it works in practice.

Thanks

[1] https://docs.openshift.com/container-platform/3.3/install_config/install/advanced_install.html#multiple-masters
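
For comparison, a sketch of the pattern from the multiple-masters example in [1]: the masters also appear in the [nodes] group, marked with openshift_schedulable=false so they do not receive user pods (hostnames reused from this bug; the exact example in the docs differs):

  [nodes]
  openshift1.example.com openshift_schedulable=false
  openshift2.example.com openshift_schedulable=false
  openshift3.example.com openshift_schedulable=false
  node1.example.com
  node2.example.com

With the masters present in [nodes], the installer configures the atomic-openshift-node and openvswitch services on them, and the upgrade playbook's "Restart node service" task then has a unit to restart.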

