Bug 1389007

Summary: Cannot upgrade multi-master cluster with Advanced installer if masters aren't also nodes
Product: OpenShift Container Platform
Component: Documentation
Version: 3.3.0
Status: CLOSED EOL
Severity: low
Priority: low
Reporter: Natale Vinto <nvinto>
Assignee: Vikram Goyal <vigoyal>
QA Contact: Vikram Goyal <vigoyal>
Docs Contact: Vikram Goyal <vigoyal>
CC: aos-bugs, bleanhar, jokerman, mmccomas
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2019-08-10 06:45:36 UTC
Attachments: inventory file

Description Natale Vinto 2016-10-26 16:21:18 UTC
Created attachment 1214347: inventory file

Description of problem:

When using the Automated In-Place Upgrade with Ansible, the upgrade of a multi-master cluster to OCP 3.3 fails at the "Restart node service" task, likely because the playbook expects each master to also be a node and therefore tries to restart the atomic-openshift-node and openvswitch services on it, as the documentation also points out:
https://docs.openshift.com/container-platform/3.3/install_config/upgrading/manual_upgrades.html#upgrading-masters

"Because masters also have node components running on them in order to be configured as part of the OpenShift SDN, restart the atomic-openshift-node and openvswitch services"


Version-Release number of selected component (if applicable):

3.2
3.3


How reproducible:


Steps to Reproduce:
1. Install OSE 3.2 on RHEL 7
2. Have an inventory in which the masters are not also nodes, like the one attached (see the sketch after this list)
3. Follow Automated In-Place Upgrade and upgrade to version 3.3 with Ansible upgrade playbook
4. ansible-playbook -i hosts /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade.yml
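
For illustration, a minimal sketch of an inventory of that shape, with the masters absent from [nodes]. The master hostnames are taken from the error output below; the node hosts and variable values are hypothetical placeholders, and the attached file remains authoritative:

[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_ssh_user=root
deployment_type=openshift-enterprise

[masters]
openshift1.example.com
openshift2.example.com
openshift3.example.com

[nodes]
# Masters intentionally not listed here; only dedicated node hosts.
node1.example.com
node2.example.com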


Actual results:

TASK [Restart node service] ****************************************************
fatal: [openshift1.example.com]: FAILED! => {"changed": false, "failed": true, "msg": "Could not find the requested service \"'atomic-openshift-node'\": "}
fatal: [openshift3.example.com]: FAILED! => {"changed": false, "failed": true, "msg": "Could not find the requested service \"'atomic-openshift-node'\": "}
fatal: [openshift2.example.com]: FAILED! => {"changed": false, "failed": true, "msg": "Could not find the requested service \"'atomic-openshift-node'\": "}
to retry, use: --limit @/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade.retry

Expected results:

failed=0


Additional info:

Attached inventory file used

Comment 1 Scott Dodson 2016-10-27 13:13:33 UTC
Per the documentation, we require that masters are also nodes for many core OCP features. Setting this to low priority.
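
For comparison, a minimal sketch of a [nodes] section that satisfies this requirement: each master is also listed as a node but marked unschedulable (openshift_schedulable=false is the host variable the 3.x advanced installer uses for this), so it joins the SDN without receiving application pods. Hostnames follow the hypothetical sketch above:

[nodes]
# Masters are also nodes, but unschedulable, so they are part of the
# OpenShift SDN without having application pods scheduled on them.
openshift1.example.com openshift_schedulable=false
openshift2.example.com openshift_schedulable=false
openshift3.example.com openshift_schedulable=false
node1.example.com
node2.example.com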

Comment 2 Natale Vinto 2016-10-27 14:15:19 UTC
Hello, I see; it is in fact suggested in the documentation [1]. However, I suggest explicitly mentioning that a master *must* also be a node. Otherwise, an installation produced by modifying the Ansible inventory can be conceptually wrong and not upgradable, even though it works in practice.

Thanks

[1] https://docs.openshift.com/container-platform/3.3/install_config/install/advanced_install.html#multiple-masters