Bug 1461477

Summary: Port-forwarding requires services bound to 127.0.0.1 or 0.0.0.0/0 (*)
Product: OpenShift Container Platform
Reporter: Brian J. Beaudoin <bbeaudoi>
Component: Node
Assignee: Derek Carr <decarr>
Status: CLOSED CURRENTRELEASE
QA Contact: Xiaoli Tian <xtian>
Severity: low
Priority: unspecified
Version: 3.5.1
CC: aos-bugs, gblomqui, jokerman, mmccomas
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Last Closed: 2019-07-03 18:16:59 UTC
Type: Bug

Description Brian J. Beaudoin 2017-06-14 14:35:55 UTC
Description of problem:

The OpenShift documentation for port-forwarding does not note the limitation that a service within a pod must listen on 0.0.0.0/0 (or at least on 'lo'/127.0.0.1) for port-forwarding to work. Pods, services, and routes are not affected.

https://docs.openshift.com/container-platform/3.5/architecture/additional_concepts/port_forwarding.html


Version-Release number of selected component (if applicable):

Although it is generally assumed that services listen on all interfaces, many applications can be configured to bind to a specific interface by name (eth0, for example). The Kubernetes API documentation suggests that 0.0.0.0/0 should be used, but binding to all interfaces is not a requirement for pods, services, or routes to function.

https://docs.openshift.com/container-platform/3.5/rest_api/kubernetes_v1.html#v1-container


How reproducible:

Launch a service within a container that takes a named interface (e.g. eth0) as the address to bind its port to, so that the service does not listen on the loopback address.
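
For example, this can be simulated with a minimal sketch (assuming python3 and 'hostname -i' are available in the image; the port is illustrative): inside the container, start a server bound only to the pod IP (eth0) instead of loopback:

    sh-4.2$ python3 -m http.server 8001 --bind "$(hostname -i)"

Port-forwarding to 8001 then fails with "connection refused", while connecting to the pod IP directly succeeds.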

Comment 1 Brian J. Beaudoin 2017-06-14 14:43:28 UTC
This is seen with `oc port-forward` when a container's exposed service does not listen on all interfaces:

[user@host ~]$ oc port-forward example-pod 5000:8001
Forwarding from 127.0.0.1:5000 -> 8001
Handling connection for 5000
E0612 13:08:03.098455   15544 portforward.go:329] an error occurred forwarding 5000 -> 8001: error forwarding port 8001 to pod example-pod, uid : exit status 1: 2017/06/12 13:08:03 socat[101965] E connect(3, AF=2 127.0.0.1:8001, 16): Connection refused

----------

Within the pod/container, the 'curl' command can be used to show that the service is listening on eth0 but not on lo.

[user@host ~]$ oc rsh example-pod
sh-4.2$ curl -ik --head https://10.1.2.34:8001/path/to/service
HTTP/1.1 200 OK
Date: Wed, 14 Jun 2017 13:51:40 GMT
Transfer-Encoding: chunked

sh-4.2$ curl -ik --head https://127.0.0.1:8001/path/to/service
curl: (7) Failed connect to 127.0.0.1:8001; Connection refused
sh-4.2$
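
For completeness, the bound addresses can also be checked directly inside the container, assuming a tool such as 'ss' is shipped in the image (a sketch; exact output varies):

    sh-4.2$ ss -tln

A listener shown only on the eth0 address (10.1.2.34:8001) and not on 127.0.0.1:8001 confirms the binding.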

-----

Comment 2 Brian J. Beaudoin 2017-06-15 15:43:02 UTC
The issue was documented in Kubernetes upstream at
https://github.com/kubernetes/kubernetes/issues/29678

A proposal to add an address flag to kubectl port-forward was submitted here:
https://github.com/kubernetes/kubernetes/issues/43962

A patch to add the address flag to kubectl port-forward was submitted here:
https://github.com/kubernetes/kubernetes/pull/46517
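
Once that flag is available, usage would presumably look something like the following (the --address flag name is taken from the linked proposal; it controls which local addresses the forwarder listens on):

    [user@host ~]$ kubectl port-forward --address 0.0.0.0 example-pod 5000:8001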

I don't see this being added to OpenShift until it is available in Kubernetes upstream. In the meantime I am working around the issue by running netcat inside the container to forward connections as follows:

    # Relay each connection accepted on 127.0.0.1:<service_port> to the
    # service listening on the pod IP; requires an nc variant that supports -c.
    while true; do
        nc -l 127.0.0.1 <service_port> -c 'nc <pod_ip> <service_port>'
    done

It's neither an elegant nor a robust solution, just a work-around for special use cases where port forwarding is required and the service within the container cannot be bound to 0.0.0.0/0 and does not support binding to multiple interfaces.
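
Where socat happens to be present in the image, a similar relay can be expressed in a single command (a sketch, with <service_port> and <pod_ip> as above; the fork option lets it serve multiple connections without the surrounding loop):

    socat TCP-LISTEN:<service_port>,bind=127.0.0.1,fork,reuseaddr TCP:<pod_ip>:<service_port>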

Comment 4 Greg Blomquist 2019-07-03 18:16:59 UTC
This merged upstream in Oct 2018.  Assuming that was too late for 3.11, this probably is in 4.x.  Without a customer case, there doesn't seem to be a need to backport to 3.x.

Closing this as current release.  If this issue persists in current release, please reopen with additional information.