Bug 1327409 - scaleup playbook uses current oc login which may not have enough permissions
Summary: scaleup playbook uses current oc login which may not have enough permissions
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.1.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Andrew Butcher
QA Contact: Wenkai Shi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-04-15 06:18 UTC by Evgheni Dereveanchin
Modified: 2019-10-10 11:53 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, the scaleup playbook used the credentials of the ansible user without ensuring that the user was a cluster admin. The scaleup playbook now uses a pristine admin kubeconfig for all tasks, ensuring that it runs with sufficient permissions.
Clone Of:
Environment:
Last Closed: 2016-10-03 14:52:14 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHBA-2016:1983 (SHIPPED_LIVE): OpenShift Container Platform 3.3 atomic-openshift-utils bug fix update, last updated 2016-10-03 18:51:38 UTC

Description Evgheni Dereveanchin 2016-04-15 06:18:31 UTC
Description of problem:
Currently, the scaleup playbook tries to tag nodes as whichever user is logged in via oc on the master. If that is not an admin user, the playbook fails.

Version-Release number of selected component (if applicable):
openshift-ansible-playbooks-3.0.47-6.git.0.7e39163.el7aos.noarch

How reproducible:
Always

Steps to Reproduce:
1. In a new environment, try to add a node using the scaleup playbook. Either prepare a new node or delete an existing one for testing purposes.
2. Create a new user without permissions (in my case, with htpasswd):
 # htpasswd /etc/origin/openshift-htpasswd deleteme 
3. Log in as that user from the command line:

# oc login
Authentication required for https://master.demo.lan:8443 (openshift)
Username: deleteme
Password: 
Login successful.

You don't have any projects. You can try to create a new project, by running

    $ oc new-project <projectname>

4. Set up the hosts file with the new node as described in the solution and run the playbook:
 https://access.redhat.com/solutions/2150381

 # ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/scaleup.yml | tee ~/ansible_scale.log
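
At this point the identity the playbook will pick up on the master is the unprivileged user from step 3. That can be confirmed before the run; a quick sanity check with the stock oc client (no bug-specific tooling assumed):

 # oc whoami
 # oc get nodes

The first command should report the unprivileged user (here "deleteme"), and the second should be denied at the cluster scope.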

Actual results:

 The playbook fails at the last step:

TASK: [openshift_manage_node | Wait for Node Registration] ******************** 
failed: [master.demo.lan] => (item=node3.demo.lan) => {"attempts": 20, "changed": false, "cmd": ["oc", "get", "node", "node3.demo.lan"], "delta": "0:00:00.113000", "end": "2016-04-14 18:18:33.774414", "failed": true, "item": "node3.demo.lan", "rc": 1, "start": "2016-04-14 18:18:33.661414", "stdout_lines": [], "warnings": []}
stderr: Error from server: User "deleteme" cannot get nodes at the cluster scope
msg: Task failed as maximum retries was encountered

FATAL: all hosts have already failed -- aborting


Expected results:

 The playbook tags the node successfully by using system:admin credentials.

Additional info:
 As this is the very last step that fails, tagging the host(s) manually can work around the issue. Alternatively, restore the system:admin login on the master and re-run the playbook.
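
Both workarounds can be done with the pristine admin credentials that ship on the master; a minimal sketch, assuming the default 3.x admin kubeconfig path and illustrative labels (use whatever openshift_node_labels your inventory defines):

 # oc label node node3.demo.lan region=primary zone=east --config=/etc/origin/master/admin.kubeconfig
 # export KUBECONFIG=/etc/origin/master/admin.kubeconfig
 # oc whoami

With KUBECONFIG pointing at admin.kubeconfig, oc whoami should report system:admin and the playbook can be re-run.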

Comment 1 Evgheni Dereveanchin 2016-04-15 14:05:32 UTC
To clear up possible confusion: I want the playbook to use the correct user to scale up and not care about who is logged into the oc tool on the master.

So ideally the playbook should work even if I delete /root/.kube/config from the master altogether.
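
That also suggests a simple way to verify a fix, sketched here under the assumption that the corrected playbook only consults the master's pristine admin.kubeconfig and ignores the root user's oc context:

 # mv /root/.kube/config /root/.kube/config.bak
 # ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/scaleup.yml
 # oc get node node3.demo.lan --config=/etc/origin/master/admin.kubeconfig

If the run succeeds and the new node shows up as registered and labeled, the playbook no longer depends on the current oc login.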

Comment 5 openshift-github-bot 2016-09-07 20:51:48 UTC
Commit pushed to master at https://github.com/openshift/openshift-ansible

https://github.com/openshift/openshift-ansible/commit/52eeaed447a97a85d0266036136ac611be13bbae
Merge pull request #2417 from abutcher/manage-node-kubeconfig

Bug 1327409 - scaleup playbook uses current oc login which may not have enough permissions
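
The approach taken in that pull request is described in the Doc Text above; a hedged shell sketch of the idea (paths, temporary file name, and labels are assumptions for illustration, not taken from the patch): every cluster-level oc call is pointed at a pristine copy of the admin kubeconfig instead of whatever context is active in /root/.kube/config.

 # cp /etc/origin/master/admin.kubeconfig /tmp/openshift-ansible-scaleup.kubeconfig
 # oc get node node3.demo.lan --config=/tmp/openshift-ansible-scaleup.kubeconfig
 # oc label node node3.demo.lan region=primary zone=east --config=/tmp/openshift-ansible-scaleup.kubeconfig

Because the --config flag bypasses the login cached in /root/.kube/config, the commands behave the same regardless of which user last ran oc login on the master.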

Comment 7 Wenkai Shi 2016-09-20 07:48:28 UTC
I have verified it with git tag "openshift-ansible-3.3.24-1"; all is well.
Will change the status once the RPM version of "openshift-ansible-3.3.24-1" is available in the errata puddle.

Comment 8 Wenkai Shi 2016-09-21 03:25:06 UTC
I have verified it with "openshift-ansible-3.3.25-1"; it works well.

Comment 10 errata-xmlrpc 2016-10-03 14:52:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1983

