Bug 1392020 - npm ERR! network tunneling socket could not be established
Summary: npm ERR! network tunneling socket could not be established
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Scott Dodson
QA Contact: Johnny Liu
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2016-11-04 15:25 UTC by Vladislav Walek
Modified: 2017-02-01 15:07 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-02-01 15:07:43 UTC
Target Upstream Version:
Embargoed:



Description Vladislav Walek 2016-11-04 15:25:26 UTC
Description of problem:

If an http/https proxy is defined during the Ansible installation of OpenShift, it is saved in the master config as HTTP_PROXY <IPADDRESS>:<PORT> (without the http:// scheme).
Unfortunately, when you then try to build a new application in OpenShift, the build fails because the npm registry is not accessible.

npm http fetch GET https://registry.npmjs.org/gulp-rename/-/gulp-rename-1.2.2.tgz
npm info retry will retry, error on last attempt: Error: tunneling socket could not be established, cause=connect EINVAL 0.0.31.144:80 - Local (0.0.0.0:0)

I1103 10:10:33.107943       1 sti.go:585] ---> Setting npm http proxy to <IPADDRESS>:<PORT>
I1103 10:10:34.019025       1 sti.go:585] ---> Setting npm https proxy to <IPADDRESS>:<PORT>
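The nodejs builder appears to hand these values straight to npm; a rough sketch of what the assemble step does with them (the exact script may differ):

  npm config set proxy "$HTTP_PROXY"
  npm config set https-proxy "$HTTPS_PROXY"

Note that 0.0.31.144 in the error above is the decimal number 8080 rendered as an IPv4 address, which is consistent with the scheme-less proxy string being mis-parsed so that the port is treated as a host.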

This happens because OpenShift passes the proxy information to the build as http_proxy=<IPADDRESS>:<PORT>, whereas the build expects http_proxy=http://<IPADDRESS>:<PORT>.
The workaround is to prefix all of the *_proxy parameters in the master config with http:// (or https://).

Question: is it possible to configure this so that the http:// prefix is added automatically?
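A rough way to see the difference outside OpenShift, assuming npm honors the proxy environment variables (address and port are placeholders):

  https_proxy=<IPADDRESS>:<PORT> npm ping          # expected to fail with the tunneling socket error
  https_proxy=http://<IPADDRESS>:<PORT> npm ping   # expected to work, provided the proxy is reachable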

Version-Release number of selected component (if applicable):

OpenShift Container Platform 3.X

How reproducible:

Set the following parameters in master-config.yaml:

kubernetesMasterConfig:
  admissionConfig:
    pluginConfig:
      BuildDefaults:
        configuration:
          apiVersion: v1
          env:
          - name: HTTP_PROXY
            value: http://<IPADDRESS>:<PORT>
          - name: HTTPS_PROXY
            value: http://<IPADDRESS>:<PORT>
          - name: NO_PROXY
            value: .cluster.local,10.84.0.0,10.85.0.0,172.30.133.28,0.0.0.0
          - name: http_proxy
            value: http://<IPADDRESS>:<PORT>
          - name: https_proxy
            value: http://<IPADDRESS>:<PORT>
          - name: no_proxy
            value: .cluster.local,10.84.0.0,10.85.0.0,172.30.133.28,0.0.0.0
          kind: BuildDefaultsConfig
  apiServerArguments:
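To confirm what a build actually received from BuildDefaults, the injected environment can be inspected on the Build object (the build name is a placeholder):

  oc get build <build-name> -o yaml | grep -i -A1 proxy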



Actual results:


Expected results:


Additional info:

Comment 3 Ben Parees 2017-02-01 14:56:38 UTC
Scott, I think this is easily fixed, but I'm surprised it hasn't been an issue for anyone else... In what format do we expect the user to provide the proxy value to ansible? With or without the protocol?

Comment 4 Scott Dodson 2017-02-01 15:01:39 UTC
The examples all include protocols for *_http_proxy and *_https_proxy variables. I'm not sure how common it is, but QE's test environment required that HTTP_PROXY be accessed via https.
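For reference, a minimal inventory sketch with the protocol included, assuming the openshift-ansible proxy variables (names and values should be adjusted to your environment):

  [OSEv3:vars]
  openshift_http_proxy=http://<IPADDRESS>:<PORT>
  openshift_https_proxy=http://<IPADDRESS>:<PORT>
  openshift_no_proxy=.cluster.local,10.84.0.0,10.85.0.0,172.30.133.28,0.0.0.0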

Comment 5 Ben Parees 2017-02-01 15:07:43 UTC
OK, so it sounds like user error, in that the values supplied to ansible should have included the protocol in the first place. (And prepending the protocol automatically would be invalid, since we can't know whether the protocol should be https or http.)

Thanks Scott.

