Bug 1298336 - Rerunning the installer fails when not in default namespace
Summary: Rerunning the installer fails when not in default namespace
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.1.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 3.3.1
Assignee: Andrew Butcher
QA Contact: Wenkai Shi
URL:
Whiteboard:
Depends On:
Blocks: 1245415
 
Reported: 2016-01-13 19:44 UTC by Eric Jones
Modified: 2016-10-27 16:12 UTC
CC: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, installation would fail if the root user's kubeconfig context had been changed to a different project prior to running the installer. The installer now uses a temporary kubeconfig and ensures that the correct namespace is used for each OpenShift client operation.
Clone Of:
Environment:
OpenShift 3.x
Last Closed: 2016-10-27 16:12:24 UTC
Target Upstream Version:
Embargoed:


Attachments: None


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:2122 0 normal SHIPPED_LIVE OpenShift Container Platform atomic-openshift-utils bug fix update 2016-10-27 20:11:30 UTC

Description Eric Jones 2016-01-13 19:44:51 UTC
Description of problem:
If a new node needs to be added to an existing cluster, the installer must be run while the current context is in the default namespace; otherwise it fails. The kubernetes service exists only in the default namespace, so if the context points anywhere else, Ansible cannot find it.

Version-Release number of selected component (if applicable):
OpenShift 3.x installer

How reproducible:
100% 

Steps to Reproduce:
1. Take an existing OSE 3.x cluster and prepare a new server to be added as a node
2. Modify the /etc/ansible/hosts file
3. Switch the current context to a namespace other than default
4. Run the installer
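The failure mode in the steps above can be simulated without a cluster. The sketch below is illustrative only: fake_oc is a hypothetical stand-in for the real oc client that mimics its namespace scoping, namely that "oc get svc kubernetes" resolves against the current context's namespace unless -n pins one explicitly, and the service only exists in default.

```shell
# fake_oc: hypothetical stand-in mimicking oc's namespace-scoped lookups.
# Usage: fake_oc get svc <name> [-n <namespace>]
CURRENT_NAMESPACE=default

fake_oc() {
    ns="$CURRENT_NAMESPACE"
    [ "$4" = "-n" ] && ns="$5"
    if [ "$3" = "kubernetes" ] && [ "$ns" = "default" ]; then
        echo "172.30.0.1"            # the service's clusterIP
    else
        echo "Error from server: services \"$3\" not found" >&2
        return 1
    fi
}

fake_oc get svc kubernetes             # works: context still on default
CURRENT_NAMESPACE=hello-openshift      # step 3: switch to another project
fake_oc get svc kubernetes 2>/dev/null || echo "lookup failed"  # the bug
fake_oc get svc kubernetes -n default  # pinning the namespace avoids it
```

Pinning the namespace on every client call (or restoring the context) is what the eventual fix does, rather than assuming the root user's context is still on default.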

Actual results:
failed: [master.test.com] => {"changed": true, "cmd": ["oc", "get", "-o", "template", "svc", "kubernetes", "--template={{.spec.clusterIP}}"], "delta": "0:00:00.287941", "end": "2016-01-13 13:37:37.135591", "rc": 1, "start": "2016-01-13 13:37:36.847650", "warnings": []}
2016-01-13 13:37:37,161 p=9573 u=root |  stderr: Error from server: services "kubernetes" not found

Expected results:
Installer to run without error

Additional info:
Potentially related to [0][1], but not sure, as [0] seems resolved
[0] https://github.com/openshift/openshift-ansible/pull/985
[1] https://github.com/openshift/openshift-ansible/issues/631

Comment 1 Jason DeTiberus 2016-01-13 20:25:47 UTC
Wherever we are using the client config, we really should be doing the following:
1) create a temporary directory
2) copy the admin.kubeconfig to the temp directory
3) run the commands
4) cleanup

That way we do not have a chance to end up with a kubeconfig in a bad state (other than if the user messes with the admin kubeconfig, but that is a different matter).
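The four steps above can be sketched as a small shell script. This is a minimal illustration, not the installer's actual implementation: the source kubeconfig is faked with a temp file so the sketch is self-contained, and the real oc invocation is shown only as a comment (the true admin.kubeconfig path, e.g. /etc/origin/master/admin.kubeconfig, is an assumption).

```shell
# Stand-in for the cluster's admin.kubeconfig, so this sketch runs anywhere.
SRC_KUBECONFIG="$(mktemp)"
echo "current-context: default/cluster/system:admin" > "$SRC_KUBECONFIG"

# 1) create a temporary directory
WORKDIR="$(mktemp -d)"
# 4) cleanup runs automatically on exit, even if a command fails
trap 'rm -rf "$WORKDIR" "$SRC_KUBECONFIG"' EXIT

# 2) copy the admin.kubeconfig into the temp directory and point
#    the client at the private copy
cp "$SRC_KUBECONFIG" "$WORKDIR/admin.kubeconfig"
export KUBECONFIG="$WORKDIR/admin.kubeconfig"

# 3) run the commands; the user's ~/.kube/config is never touched, e.g.:
#    oc get svc kubernetes -n default --template='{{.spec.clusterIP}}'
echo "using $KUBECONFIG"
```

Because any context switch (oc project) lands in the throwaway copy, the user's own kubeconfig can never be left in a bad state.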

Comment 5 Brenton Leanhardt 2016-01-28 16:10:41 UTC
I think we should close Bug #1245415 once we can verify admins are able to change openshift_master_api_url, rerun ansible, and have all the tasks still work.  I don't think we'll ever be able to update ~/.kube/config, since the admin has likely made local modifications that we would overwrite.

Comment 7 openshift-github-bot 2016-09-09 16:24:55 UTC
Commit pushed to master at https://github.com/openshift/openshift-ansible

https://github.com/openshift/openshift-ansible/commit/f85dd1604e9c3d011b24654e6c40c0345e2e96bb
Merge pull request #2427 from abutcher/BZ1298336

Bug 1298336 - Rerunning the installer fails when not in default namespace

Comment 9 Wenkai Shi 2016-10-13 08:53:51 UTC
It is fixed in openshift v3.3.0.35

Comment 10 Wenkai Shi 2016-10-13 09:05:58 UTC
It is fixed in this version:
openshift-ansible-3.3.35-1.git.0.1be8ddc.el7.noarch

After creating the OCP cluster, ssh to the master as root, create and switch to a new project, then run the scale-up playbook; it succeeds.

[root@master ~]# oc new-project hello-openshift
[root@master ~]# oc project hello-openshift

[root@ansible ~]# ansible-playbook -i hosts -v /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-node/scaleup.ym

Comment 12 errata-xmlrpc 2016-10-27 16:12:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:2122

