Bug 1455857 - Fail to enable openshift_excluder due to upgrade node missing upgrade_nodes playbook
Status: CLOSED ERRATA
Product: OpenShift Container Platform
Classification: Red Hat
Component: Upgrade
Version: 3.6.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assigned To: Jan Chaloupka
QA Contact: liujia
Depends On:
Blocks:
 
Reported: 2017-05-26 06:21 EDT by liujia
Modified: 2017-08-16 15 EDT
CC List: 4 users

See Also:
Fixed In Version: openshift-ansible-3.6.99-1.git.0.42f2439.el7
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-08-10 01:25:32 EDT
Type: Bug
Regression: ---




External Trackers:
  Tracker ID: Red Hat Product Errata RHEA-2017:1716
  Priority: normal
  Status: SHIPPED_LIVE
  Summary: Red Hat OpenShift Container Platform 3.6 RPM Release Advisory
  Last Updated: 2017-08-10 05:02:50 EDT

Description liujia 2017-05-26 06:21:34 EDT
Description of problem:
When upgrading OCP with upgrade_control_plane.yml and upgrade_nodes.yml in two separate phases, openshift_excluder is not re-enabled after the upgrade completes. The upgrade logs show that none of the tasks in playbooks/common/openshift-cluster/upgrades/upgrade_nodes.yml, which is included by playbooks/common/openshift-cluster/upgrades/v3_6/upgrade_nodes.yml, are executed.

Version-Release number of selected component (if applicable):
atomic-openshift-utils-3.6.80-1.git.0.807fc98.el7.noarch

How reproducible:
always

Steps to Reproduce:
1. install OCP 3.5 without the excluders installed (enable_excluders=false); a minimal inventory sketch follows these steps
2. upgrade OCP 3.5 to OCP 3.6 in two separate phases:
# ansible-playbook -i hosts /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade_control_plane.yml

# ansible-playbook -i hosts /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade_nodes.yml
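(For reference, a minimal BYO inventory sketch for step 1. The hostnames and most values here are illustrative assumptions, not taken from the original report; the relevant setting is enable_excluders=false, and the master is deliberately listed under [nodes] as well, since that combination is what triggers the bug.)

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
deployment_type=openshift-enterprise
# excluders are left uninstalled/disabled during the initial 3.5 install
enable_excluders=false

[masters]
master.example.com

[etcd]
master.example.com

[nodes]
# the master is also a node, which is the case that hits the bug
master.example.com openshift_schedulable=true
node1.example.com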

Actual results:
The upgrade completes, but openshift_excluder is not re-enabled afterward.

Expected results:
The upgrade succeeds, with the excluders in the correct state.

Additional info:
Refer to the attached upgrade logs.
Comment 8 Jan Chaloupka 2017-06-08 10:08:34 EDT
I see now:

/playbooks/common/openshift-cluster/upgrades/upgrade_nodes.yml has

- name: Drain and upgrade nodes
  hosts: oo_nodes_to_upgrade:!oo_masters_to_config

which means that if a master is also a node, the play never runs on that host. As a result, the excluder is never re-enabled there.
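(A tiny standalone illustration of that ":!" host pattern, which is Ansible's set difference. The file name and host names below are made up for the demo; in the real playbooks the oo_* groups are built at runtime, but the pattern semantics are the same.)

# demo.ini
[nodes]
master1
node1

[masters]
master1

# targets only node1; master1 is excluded because it is also in [masters]
ansible -i demo.ini 'nodes:!masters' --list-hosts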
Comment 9 Jan Chaloupka 2017-06-08 10:32:54 EDT
Basically, no one should run the node upgrade play against a host that is also a master. This needs to be checked before the upgrade_nodes.yml playbook is run.
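(A sketch of the kind of guard described here, reusing the oo_* group names these playbooks build at runtime. This is only an illustration of the idea, not the actual change, which is in the upstream PR linked in the next comment.)

- name: Verify the node upgrade does not target masters
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
  - name: Fail early if a host is in both oo_nodes_to_upgrade and oo_masters_to_config
    fail:
      msg: >-
        {{ item }} is a master and must be upgraded with upgrade_control_plane.yml,
        not with the node-only upgrade play
    when: item in (groups['oo_masters_to_config'] | default([]))
    with_items: "{{ groups['oo_nodes_to_upgrade'] | default([]) }}"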
Comment 10 Jan Chaloupka 2017-06-08 11:09:36 EDT
Upstream PR: https://github.com/openshift/openshift-ansible/pull/4393
Comment 15 liujia 2017-06-23 03:25:22 EDT
Version:
atomic-openshift-utils-3.6.121-1.git.0.ed0b72c.el7.noarch

Steps to Reproduce:
1. install OCP 3.5 without the excluders installed
2. upgrade OCP 3.5 to OCP 3.6 in two separate phases:
# ansible-playbook -i hosts /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade_control_plane.yml

# ansible-playbook -i hosts /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade_nodes.yml

The upgrade succeeds, with both excluders installed and correctly enabled.
Comment 17 errata-xmlrpc 2017-08-10 01:25:32 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1716
