Description of problem:
If a new node needs to be added to an existing cluster, the current project must be "default"; otherwise the installer fails. Ansible looks up the "kubernetes" service without specifying a namespace, so the lookup fails when run from any other project.

Version-Release number of selected component (if applicable):
OpenShift 3.x installer

How reproducible:
100%

Steps to Reproduce:
1. Take an existing OSE 3.x cluster and prepare a new server to be added as a node
2. Modify the /etc/ansible/hosts file
3. Switch to a namespace other than default
4. Run the installer

Actual results:
failed: [master.test.com] => {"changed": true, "cmd": ["oc", "get", "-o", "template", "svc", "kubernetes", "--template={{.spec.clusterIP}}"], "delta": "0:00:00.287941", "end": "2016-01-13 13:37:37.135591", "rc": 1, "start": "2016-01-13 13:37:36.847650", "warnings": []}
2016-01-13 13:37:37,161 p=9573 u=root | stderr: Error from server: services "kubernetes" not found

Expected results:
Installer runs without error

Additional info:
Potentially related to [0][1], though [0] appears resolved.
[0] https://github.com/openshift/openshift-ansible/pull/985
[1] https://github.com/openshift/openshift-ansible/issues/631
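The failing task runs `oc get` without pinning a namespace, so the service lookup happens in whatever project is current. A minimal illustration of the difference (the commands are only printed here, since running them needs a live cluster; pinning `-n default` is one way to make the lookup project-independent, not necessarily the fix that was merged):

```shell
# Command the installer ran, per the error output above: it resolves
# "kubernetes" in the caller's *current* project, so it fails with
# 'services "kubernetes" not found' outside the default project.
implicit="oc get -o template svc kubernetes --template={{.spec.clusterIP}}"

# Same query with the namespace pinned, which works from any project.
pinned="oc get -o template svc kubernetes -n default --template={{.spec.clusterIP}}"

echo "fails outside default: $implicit"
echo "works from any project: $pinned"
```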
Wherever we are using the client config, we really should be doing the following:
1) create a temporary directory
2) copy the admin.kubeconfig to the temp directory
3) run the commands
4) clean up
That way we do not have a chance to end up with a kubeconfig in a bad state (other than if the user messes with the admin kubeconfig, but that is a different matter).
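A minimal sketch of those four steps. The admin.kubeconfig path is an assumption (it varies by OSE release), and the sketch falls back to a dummy file so it can run without a cluster; the real `oc` call is left commented out:

```shell
#!/bin/sh
# Sketch of the temp-kubeconfig pattern: operate on a copy so the real
# admin.kubeconfig can never be left in a bad state.
set -eu

tmpdir=$(mktemp -d)              # 1) create a temporary directory
trap 'rm -rf "$tmpdir"' EXIT     # 4) cleanup runs even if a command fails

# 2) copy the admin kubeconfig into the temp directory
# (path is an assumption; falls back to a dummy file so the sketch runs anywhere)
src=/etc/origin/master/admin.kubeconfig
if [ ! -f "$src" ]; then
    src="$tmpdir/dummy.kubeconfig"
    printf 'apiVersion: v1\nkind: Config\n' > "$src"
fi
cp "$src" "$tmpdir/admin.kubeconfig"

# 3) run the commands against the copy, never the original
KUBECONFIG="$tmpdir/admin.kubeconfig"
export KUBECONFIG
# oc get svc kubernetes -n default --template='{{.spec.clusterIP}}'
echo "commands would run with KUBECONFIG=$KUBECONFIG"
```

The `trap … EXIT` is what makes step 4 reliable: the temp directory is removed whether the commands succeed or fail.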
I think we should close Bug #1245415 once we can verify admins are able to change openshift_master_api_url, rerun ansible, and have all the tasks still work. I don't think we'll ever be able to update ~/.kube/config, since the admin has likely made local modifications that we would overwrite.
Commit pushed to master at https://github.com/openshift/openshift-ansible
https://github.com/openshift/openshift-ansible/commit/f85dd1604e9c3d011b24654e6c40c0345e2e96bb
Merge pull request #2427 from abutcher/BZ1298336
Bug 1298336 - Rerunning the installer fails when not in default namespace
It is fixed in openshift v3.3.0.35.
Verified fixed in this version: openshift-ansible-3.3.35-1.git.0.1be8ddc.el7.noarch
After creating an OCP cluster, ssh in as root, create and switch to a new project, then run the scale-up playbook; it succeeds.
[root@master ~]# oc new-project hello-openshift
[root@master ~]# oc project hello-openshift
[root@ansible ~]# ansible-playbook -i hosts -v /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-node/scaleup.ym
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2016:2122