Description of problem:
Currently the scaleup playbook tries to tag nodes as the user currently logged in on the Master. If that is not an admin user, the playbook fails.

Version-Release number of selected component (if applicable):
openshift-ansible-playbooks-3.0.47-6.git.0.7e39163.el7aos.noarch

How reproducible:
Always

Steps to Reproduce:
1. In a new environment, try to add a node using the scaleup playbook. Either prepare a new node or delete an existing one for testing purposes.

2. Create a new user without permissions (in my case, with htpasswd):

# htpasswd /etc/origin/openshift-htpasswd deleteme

3. Log in as that user with the oc tool on the Master:

# oc login
Authentication required for https://master.demo.lan:8443 (openshift)
Username: deleteme
Password:
Login successful.

You don't have any projects. You can try to create a new project, by running

    $ oc new-project <projectname>

4. Set up the hosts file with the new node as described in the solution (https://access.redhat.com/solutions/2150381) and run the playbook:

# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/scaleup.yml | tee ~/ansible_scale.log

Actual results:
The playbook fails at the last step:

TASK: [openshift_manage_node | Wait for Node Registration] ********************
failed: [master.demo.lan] => (item=node3.demo.lan) => {"attempts": 20, "changed": false, "cmd": ["oc", "get", "node", "node3.demo.lan"], "delta": "0:00:00.113000", "end": "2016-04-14 18:18:33.774414", "failed": true, "item": "node3.demo.lan", "rc": 1, "start": "2016-04-14 18:18:33.661414", "stdout_lines": [], "warnings": []}
stderr: Error from server: User "deleteme" cannot get nodes at the cluster scope
msg: Task failed as maximum retries was encountered

FATAL: all hosts have already failed -- aborting

Expected results:
The playbook tags the node successfully, using the system:admin credentials.

Additional info:
As this is the very last step that fails, tagging the host(s) manually can work around the problem. Alternatively, restore the system:admin login on the Master and re-run the playbook.
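For reference, the manual workaround looks roughly like this (a sketch: the node name and label values are illustrative, taken from the reproduction steps; /etc/origin/master/admin.kubeconfig is the default admin kubeconfig location on the Master).

Check that the new node has registered, using the admin kubeconfig instead of the current login:

# oc --config=/etc/origin/master/admin.kubeconfig get node node3.demo.lan

Apply the node labels the playbook would have applied (the values here are placeholders):

# oc --config=/etc/origin/master/admin.kubeconfig label node node3.demo.lan region=primary zone=default

Alternatively, one common way to restore the system:admin context on the Master (back up any existing config first):

# cp /etc/origin/master/admin.kubeconfig /root/.kube/config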
To clear up possible confusion: I want the playbook to use the correct user to scale up and not care about who is logged into the oc tool on the Master. Ideally the playbook should work even if I delete /root/.kube/config from the Master altogether.
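Put differently, once fixed, something like the following should succeed (a sketch, assuming the default file locations from the reproduction above).

Move the current login context out of the way on the Master rather than deleting it:

# mv /root/.kube/config /root/.kube/config.bak

Re-run the scaleup playbook; it should no longer depend on whoever is logged in:

# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/scaleup.yml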
Commit pushed to master at https://github.com/openshift/openshift-ansible

https://github.com/openshift/openshift-ansible/commit/52eeaed447a97a85d0266036136ac611be13bbae
Merge pull request #2417 from abutcher/manage-node-kubeconfig

Bug 1327409 - scaleup playbook uses current oc login which may not have enough permissions
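For context, the fix points the openshift_manage_node tasks at a copy of the cluster admin kubeconfig instead of the caller's current login. Conceptually, the role now does the shell equivalent of the following (a rough sketch, not the literal Ansible tasks).

Create a private copy of the admin kubeconfig:

# ADMIN_KUBECONFIG=$(mktemp)
# cp /etc/origin/master/admin.kubeconfig "$ADMIN_KUBECONFIG"

Pass it explicitly to every oc invocation, e.g. the node registration check:

# oc get node node3.demo.lan --config="$ADMIN_KUBECONFIG"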
I have verified it with git tag "openshift-ansible-3.3.24-1"; all is well. I will change the status when the RPM build of "openshift-ansible-3.3.24-1" lands in the errata puddle.
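To confirm which build is installed before re-testing, a quick check (package names as used elsewhere in this report):

# rpm -q openshift-ansible openshift-ansible-playbooks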
I have verified it with "openshift-ansible-3.3.25-1", it works well.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1983