@scott, I hit a similar issue with "Drain Node for Kubelet upgrade". I think there are similar issues with import template, redeploy-certificates, deploy logging and metrics, etc. If we fix all of them, QE will need a regression test.

Reproduce steps:
1. oc login as a normal user
2. Run the upgrade playbook.

TASK [Mark node unschedulable] *************************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/upgrade_control_plane.yml:264
changed: [openshift-214.lab.eng.nay.redhat.com -> openshift-214.lab.eng.nay.redhat.com] => {
    "attempts": 1,
    "changed": true,
    "results": {
        "cmd": "/usr/bin/oc adm manage-node openshift-214.lab.eng.nay.redhat.com --schedulable=False",
        "nodes": [
            {
                "name": "openshift-214.lab.eng.nay.redhat.com",
                "schedulable": false
            }
        ],
        "results": "NAME                                   STATUS                     AGE\nopenshift-214.lab.eng.nay.redhat.com   Ready,SchedulingDisabled   85d\n",
        "returncode": 0
    },
    "state": "present"
}

TASK [Drain Node for Kubelet upgrade] ******************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/upgrade_control_plane.yml:274
fatal: [openshift-214.lab.eng.nay.redhat.com -> openshift-214.lab.eng.nay.redhat.com]: FAILED! => {
    "changed": true,
    "cmd": [
        "oadm",
        "drain",
        "openshift-214.lab.eng.nay.redhat.com",
        "--force",
        "--delete-local-data",
        "--ignore-daemonsets"
    ],
    "delta": "0:00:00.307291",
    "end": "2017-07-13 02:50:34.099727",
    "failed": true,
    "rc": 1,
    "start": "2017-07-13 02:50:33.792436",
    "warnings": []
}

STDERR:

Error from server (Forbidden): User "anli" cannot get nodes at the cluster scope
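The failure above happens because the drain command inherits whatever credentials the current kubeconfig context holds; after a normal user runs `oc login`, the task drains as that user, who cannot get nodes at cluster scope. A minimal sketch of the difference, assuming the usual 3.x admin kubeconfig path on masters (the path and the use of the `--config` flag are assumptions here, not taken from the playbook):

```shell
# Hypothetical sketch: how the drain invocation changes when the
# cluster-admin kubeconfig is passed explicitly instead of relying
# on the current login context.
NODE="openshift-214.lab.eng.nay.redhat.com"
ADMIN_KUBECONFIG="/etc/origin/master/admin.kubeconfig"   # assumed default path

# As run by the failing task: credentials come from the current login
# context, so a normal user hits "Forbidden" on cluster-scope gets.
DRAIN_AS_CURRENT_USER="oadm drain $NODE --force --delete-local-data --ignore-daemonsets"

# With the cluster-admin kubeconfig made explicit:
DRAIN_AS_ADMIN="oadm drain $NODE --config=$ADMIN_KUBECONFIG --force --delete-local-data --ignore-daemonsets"

echo "$DRAIN_AS_ADMIN"
```

Making the kubeconfig explicit keeps the playbook's behavior independent of whoever last logged in on the master.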
Verified and passed with openshift-ansible-3.5.99.
Drain node tasks are in upgrade_control_plane.yml, upgrade_nodes.yml, and docker_upgrade.yml. The admin kubeconfig also needs to be added to [1] and [2], so re-opening this bug.

[1] playbooks/common/openshift-cluster/upgrades/upgrade_nodes.yml
[2] playbooks/byo/openshift-cluster/upgrades/docker/docker_upgrade.yml
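For reference, the fix in each of those playbooks amounts to passing the cluster-admin kubeconfig explicitly on the drain task. A hedged sketch of such a task (the fact names and delegation target follow common openshift-ansible conventions but are assumptions here, not copied from the actual patch):

```yaml
# Sketch only: drain task with the admin kubeconfig made explicit,
# so it no longer depends on whoever last ran `oc login` on the master.
- name: Drain Node for Kubelet upgrade
  command: >
    {{ openshift.common.admin_binary }} drain {{ openshift.common.hostname | lower }}
    --config={{ openshift.common.config_base }}/master/admin.kubeconfig
    --force --delete-local-data --ignore-daemonsets
  delegate_to: "{{ groups.oo_first_master.0 }}"
```

The same `--config=...` addition would apply to the drain commands in upgrade_control_plane.yml, upgrade_nodes.yml, and docker_upgrade.yml.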
Version: atomic-openshift-utils-3.5.101-1.git.0.0107544.el7.noarch

Steps:
1. Install OCP 3.4
2. oc login with a non-system:admin user on the master host
   ...
   current-context: /x.x.x.x:8443/jliu
   ...
3. Upgrade 3.4 to 3.5
   # upgrade_control_plane.yml
   # upgrade_nodes.yml

Upgrade succeeded.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:1810