Bug 1370462 - atomic-openshift-install script fails with connection refused.
Summary: atomic-openshift-install script fails with connection refused.
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Tim Bielawa
QA Contact: Johnny Liu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-08-26 12:11 UTC by Bhaskarakiran
Modified: 2016-11-23 23:13 UTC (History)
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-31 09:14:12 UTC
Target Upstream Version:


Attachments
ansible log (2.32 MB, text/plain)
2016-08-26 12:11 UTC, Bhaskarakiran

Description Bhaskarakiran 2016-08-26 12:11:15 UTC
Created attachment 1194314 [details]
ansible log

Description of problem:
======================

The script fails at the below stage:

TASK [openshift_examples : Import RHEL streams] ********************************
failed: [10.70.41.151] (item=/etc/origin/examples/image-streams/image-streams-rhel7.json) => {"changed": false, "cmd": ["/usr/local/bin/oc", "create", "-n", "openshift", "-f", "/etc/origin/examples/image-streams/image-streams-rhel7.json"], "delta": "0:00:01.668502", "end": "2016-08-26 17:27:38.939282", "failed": true, "failed_when_result": true, "item": "/etc/origin/examples/image-streams/image-streams-rhel7.json", "rc": 1, "start": "2016-08-26 17:27:37.270780", "stderr": "unable to connect to a server to handle \"imagestreamlists\": Get https://dhcp41-151.lab.eng.blr.redhat.com:8443/oapi: x509: certificate signed by unknown authority", "stdout": "", "stdout_lines": [], "warnings": []}
failed: [10.70.41.151] (item=/etc/origin/examples/image-streams/dotnet_imagestreams.json) => {"changed": false, "cmd": ["/usr/local/bin/oc", "create", "-n", "openshift", "-f", "/etc/origin/examples/image-streams/dotnet_imagestreams.json"], "delta": "0:00:01.614733", "end": "2016-08-26 17:27:42.311715", "failed": true, "failed_when_result": true, "item": "/etc/origin/examples/image-streams/dotnet_imagestreams.json", "rc": 1, "start": "2016-08-26 17:27:40.696982", "stderr": "unable to connect to a server to handle \"lists\": Get https://dhcp41-151.lab.eng.blr.redhat.com:8443/api: x509: certificate signed by unknown authority", "stdout": "", "stdout_lines": [], "warnings": []}

All the machines are freshly installed with RHEL, and the RHAOS 3.3 repos are enabled.
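
For reference, the failing task can be re-run by hand on the master to confirm the error outside of Ansible; this is a sketch using the exact command and file path taken from the log above:

# Re-run the image stream import performed by the installer task; with the
# stale kubeconfig in place this reproduces the same x509 "unknown authority" error.
/usr/local/bin/oc create -n openshift -f /etc/origin/examples/image-streams/image-streams-rhel7.json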

Version-Release number of selected component (if applicable):
=============================================================
3.3.0.24-1

[root@dhcp43-179 ~]# rpm -qa |grep atomic
atomic-openshift-3.3.0.24-1.git.0.0bb5f8f.el7.x86_64
atomic-openshift-utils-3.3.13-1.git.0.7435ce7.el7.noarch
atomic-openshift-clients-3.3.0.24-1.git.0.0bb5f8f.el7.x86_64
[root@dhcp43-179 ~]# 


How reproducible:
=================
100%

Steps to Reproduce:
1. Install the OpenShift cluster with atomic-openshift-installer (see the example invocation below).
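
A minimal sketch of the commands involved, assuming the 3.3 quick installer shipped in atomic-openshift-utils and its install/uninstall subcommands; exact prompts and options may differ:

# Interactive quick install run from the provisioning host
atomic-openshift-installer install

# Cleanup run between failed install attempts
atomic-openshift-installer uninstall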

Actual results:


Expected results:


Additional info:
The ansible log is attached.

Comment 1 Jason DeTiberus 2016-08-26 14:38:24 UTC
Bhaskarakiran,

Looking at the log you provided, it appears that the installation was started and stopped multiple times. Based on the error you pasted above, the kubeconfig being used by the openshift client on the master host appears to reference an old CA from one of the previous installs.

Did you run the uninstall utility between the failed install attempts? 

This can be rectified by copying /etc/origin/master/admin.kubeconfig to /root/.kube/config and, if using a user other than root as the ssh user, also copying it to ~<ssh user>/.kube/config.

We currently don't overwrite this file if it already exists, which is something that we are looking to address better in the 3.4 time frame.
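
A sketch of the check and workaround described above, assuming the default file locations and root as the ssh user (adjust the home directory otherwise):

# Compare the CA embedded in the existing kubeconfig with the CA from the current install
grep certificate-authority-data /root/.kube/config | awk '{print $2}' | base64 -d | openssl x509 -noout -fingerprint
openssl x509 -noout -fingerprint -in /etc/origin/master/ca.crt

# If the fingerprints differ, replace the stale kubeconfig with the freshly generated admin kubeconfig
cp /etc/origin/master/admin.kubeconfig /root/.kube/config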

Comment 2 Bhaskarakiran 2016-08-29 06:17:52 UTC
Hi Jason,

Yes, as the install failed, I tried the uninstall script. I will check with the workaround you suggested. Thanks.

Comment 3 Bhaskarakiran 2016-08-31 09:14:12 UTC
It worked; I am not seeing the same failure again. I picked up the 3.3.0.27 build and didn't see any issues. Marking this as closed for now. Thanks, Jason.

