`oc cluster up` behind a proxy fails to start. This is a blocker for the minishift-behind-a-proxy scenario.

##### Version
```
$ ./oc version
oc v3.11.0+fb68584-2
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
```

##### Steps To Reproduce
```
# ./oc cluster up --base-dir /var/lib/minishift/base --http-proxy <my_proxy>:3128 --no-proxy 172.17.0.0/16,172.30.1.1 --public-hostname 192.168.42.84 --routing-suffix 192.168.42.84.nip.io
[...]
I1015 02:21:21.483888    2044 interface.go:26] Installing "kube-proxy" ...
I1015 02:21:21.484383    2044 interface.go:26] Installing "kube-dns" ...
I1015 02:21:21.484390    2044 interface.go:26] Installing "openshift-service-cert-signer-operator" ...
I1015 02:21:21.484396    2044 interface.go:26] Installing "openshift-apiserver" ...
I1015 02:21:21.484436    2044 apply_template.go:81] Installing "openshift-apiserver"
I1015 02:21:21.485131    2044 apply_template.go:81] Installing "kube-dns"
I1015 02:21:21.486115    2044 apply_template.go:81] Installing "kube-proxy"
I1015 02:21:21.487949    2044 apply_template.go:81] Installing "openshift-service-cert-signer-operator"
I1015 02:21:26.479396    2044 interface.go:41] Finished installing "kube-proxy" "kube-dns" "openshift-service-cert-signer-operator" "openshift-apiserver"
Error: Get https://192.168.42.84:8443/apis/apiregistration.k8s.io/v1beta1/apiservices: dial tcp 192.168.42.84:8443: connect: connection refused

$ docker logs cc8535a7caa1
E1015 06:22:13.164397       1 reflector.go:136] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.StorageClass: Get https://192.168.42.84:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.42.84:8443: connect: connection refused
E1015 06:22:13.165570       1 reflector.go:136] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Node: Get https://192.168.42.84:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.42.84:8443: connect: connection refused
E1015 06:22:13.168362       1 reflector.go:136] k8s.io/client-go/informers/factory.go:130: Failed to list *v1beta1.PodDisruptionBudget: Get https://192.168.42.84:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.42.84:8443: connect: connection refused
E1015 06:22:13.168410       1 reflector.go:136] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.ReplicationController: Get https://192.168.42.84:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 192.168.42.84:8443: connect: connection refused
E1015 06:22:13.169557       1 reflector.go:136] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Service: Get https://192.168.42.84:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.42.84:8443: connect: connection refused
```

##### Current Result
```
Error: Get https://192.168.42.84:8443/apis/apiregistration.k8s.io/v1beta1/apiservices: dial tcp 192.168.42.84:8443: connect: connection refused
```

##### Expected Result
`oc cluster up` should start successfully.
@Praveen out of curiosity, are you still able to use the workaround from https://bugzilla.redhat.com/show_bug.cgi?id=1618311#c16 with oc v1.11?
@Juan yes, I did try that workaround before creating the issue. If you check the logs, you'll see I used `--no-proxy 172.17.0.0/16`, which is the Docker bridge subnet.
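For context, which addresses a given `--no-proxy` value covers can be sanity-checked with a short script. This is an illustrative sketch, not part of oc or minishift: the no-proxy value is copied from the reproducer above, and the simplified `bypasses_proxy` matcher only handles literal IPs and CIDR entries (real proxy handling also matches hostname suffixes).

```python
import ipaddress

# --no-proxy value from the reproducer above
no_proxy = "172.17.0.0/16,172.30.1.1"

def bypasses_proxy(host: str, no_proxy: str) -> bool:
    """Return True if `host` (a literal IP) matches an entry in the
    no-proxy list. Simplified: handles only IPs and CIDR blocks."""
    addr = ipaddress.ip_address(host)
    for entry in no_proxy.split(","):
        try:
            if addr in ipaddress.ip_network(entry.strip(), strict=False):
                return True
        except ValueError:
            continue  # hostname-suffix entries are skipped in this sketch
    return False

print(bypasses_proxy("172.17.0.2", no_proxy))     # address on the Docker bridge -> True
print(bypasses_proxy("192.168.42.84", no_proxy))  # the public hostname IP -> False
```

Note that the cluster's public IP from the reproducer, 192.168.42.84, is not covered by the list, so requests to it would go through the proxy unless the tooling adds it to NO_PROXY internally.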
ping @Juan, do you need any other info from me? Is there any other workaround for this issue?
@michal fojtik @juan - It looks like this is still a blocker. Can you please provide the necessary info for Praveen to make progress? This blocks the 3.11 z-stream release.
Thanks Juan. Do you have a link to the PR/patch? One confusing thing here is that some of the files were moved around and their history was lost, so it would be good to see the diff itself. Thx!
What is the status of this? What needs to be done, as this is currently a blocker for CDK.
Gerard, Juan is reviewing the output that Praveen collected in comment #17. When he's through, he can talk about next steps.
FWIW, as mentioned in the other BZ, this code for setting proxy ENV vars seems to have been removed after 3.9. It's unclear whether that was intentional, whether it was moved to another library, etc.
Oops, forgot the link to the HTTP_PROXY env var code: https://github.com/openshift/origin/blob/release-3.9/pkg/oc/bootstrap/docker/openshift/helper.go#L730-L759
@Nick this is the build config env var handling for the proxy case, which is also not present in 3.10. Yet `oc cluster up` executes without any issue on 3.10, while it fails like this on 3.11.
Upon testing with a new, clean environment, I can now confirm that upstream PR [1] does fix this bug as well. I will try to move that PR along. Additionally I have opened a PR to pick this fix into the release-3.11 Origin branch: https://github.com/openshift/origin/pull/21604 Praveen, could you double check in your environment that this PR works for you as well? Once you apply the PR's changes, you can follow the steps for testing kubelet changes with `oc cluster up` from comment 27. Thanks
The PR lands in oc version >= v3.11.52, which is not built yet. I will wait for that build and verify then.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0024