Bug 1368131 - HTTP timeout in oc
Summary: HTTP timeout in oc
Keywords:
Status: CLOSED DUPLICATE of bug 1358393
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: RFE
Version: 3.2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Dan McPherson
QA Contact: Johnny Liu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-08-18 13:39 UTC by Miheer Salunke
Modified: 2020-02-14 17:53 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-18 15:02:08 UTC
Target Upstream Version:


Attachments

Description Miheer Salunke 2016-08-18 13:39:23 UTC
1. Proposed title of this feature request
=> HTTP timeout in oc



3. What is the nature and description of the request?
=> Using oc, e.g. for rsh or for tailing logs, leads to timeouts when nothing is going on (no new log output, nothing done in the shell). The error message shown on the client side is "unexpected EOF".
It seems related to this: http://stackoverflow.com/questions/32634980/haproxy-closes-long-living-tcp-connections-ignoring-tcp-keepalive
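
For illustration, a minimal Go sketch of turning on kernel TCP keepalive on a client connection (the address and interval are placeholders). As the linked post explains, this alone does not help: keepalive probes carry no application data, so a proxy's data-inactivity timer still expires.

    package main

    import (
        "log"
        "net"
        "time"
    )

    func main() {
        // Placeholder master endpoint, not a real address.
        conn, err := net.Dial("tcp", "master.example.com:8443")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Probe the peer every 30s while the connection is idle.
        tcp := conn.(*net.TCPConn)
        tcp.SetKeepAlive(true)
        tcp.SetKeepAlivePeriod(30 * time.Second)

        // ... use conn; the probes keep the TCP session alive, but a proxy
        // timing out on data inactivity (e.g. HAProxy "timeout server")
        // still closes the connection, because no data flows.
    }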


We have an HAProxy in front of 3 OpenShift master servers, which load-balances connections from external clients to the masters. We already found a workaround by setting "timeout server" in HAProxy to "timeout server 61m".
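
For reference, a minimal haproxy.cfg sketch of that workaround. Section names and addresses are placeholders, and the srvtcpka line is an optional extra, not part of the workaround itself:

    frontend openshift-api
        bind *:8443
        mode tcp
        default_backend masters

    backend masters
        mode tcp
        balance source
        timeout server 61m    # the workaround: outlive idle rsh/log streams
        option srvtcpka       # optional: kernel TCP keepalives toward the masters
        server master1 192.0.2.11:8443 check
        server master2 192.0.2.12:8443 check
        server master3 192.0.2.13:8443 check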
Based on that Stack Overflow page, one possible solution is to disable the HAProxy request timeout and instead rely on TCP keepalives to tell us when the underlying connection has died. But this might leave OpenShift vulnerable to something like Slowloris attacks, so perhaps we do need to introduce an HTTP-level keepalive (e.g. frames sent from the client to the server that OpenShift simply ignores).

Is there a better way? Does the oc client support some sort of keepalive? Or is it even planned to use websockets for this? Do you have any best-practice configuration for an HAProxy in front of the OpenShift masters?
I think we already use websockets to stream the console data; otherwise there would be huge overhead (and thus poor latency) from opening new HTTP connections. Perhaps we could leverage this: http://stackoverflow.com/questions/23238319/websockets-ping-pong-why-not-tcp-keepalive
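
A rough client-side sketch of that ping/pong idea, assuming the gorilla/websocket package (the endpoint URL and intervals are placeholders, not an actual OpenShift API path):

    package main

    import (
        "log"
        "time"

        "github.com/gorilla/websocket"
    )

    func main() {
        // Placeholder URL standing in for a log/exec stream endpoint.
        conn, _, err := websocket.DefaultDialer.Dial("wss://master.example.com:8443/logs", nil)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Ping well inside the proxy's idle timeout so the connection
        // always carries traffic and the inactivity timer keeps resetting.
        go func() {
            ticker := time.NewTicker(30 * time.Second)
            defer ticker.Stop()
            for range ticker.C {
                deadline := time.Now().Add(10 * time.Second)
                if err := conn.WriteControl(websocket.PingMessage, nil, deadline); err != nil {
                    return
                }
            }
        }()

        // Read the streamed data as usual; the server's pong replies are
        // consumed internally while reading.
        for {
            if _, _, err := conn.ReadMessage(); err != nil {
                log.Println("stream closed:", err)
                return
            }
        }
    }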

How do we as Red Hat recommend configuring the load balancers to circumvent this issue? What is the best practice here?
It seems like this should be a pretty simple change to make in the client, and it wouldn't have much impact (in terms of server load).



7. Is there already an existing RFE upstream or in Red Hat Bugzilla?
=> Not known



10. List any affected packages or components.
=> oc, load balancers such as HAProxy

Comment 2 Dan McPherson 2016-08-18 15:02:08 UTC

*** This bug has been marked as a duplicate of bug 1358393 ***

Comment 3 Alwyn Kik 2018-12-11 17:05:48 UTC
We are still experiencing this exact issue even when defining --request-timeout.
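
For reference, the general form of what we tried (pod name and duration are placeholders):

    oc logs -f <pod-name> --request-timeout=30m

The stream still ends with "unexpected EOF" once it goes idle.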

The fix for the "duplicate" bug fixes other (definitely related) issues, but not this one.

Our situation is *identical* to the one originally described:

We have an HAProxy in front of 3 OpenShift master servers, which load-balances connections from external clients to the masters. We already found a workaround by setting "timeout server" in HAProxy to "timeout server 61m".


We have indeed set "timeout server" to a higher value in the frontend HAProxy, but it seems an HTTP keep-alive request of sorts would be a much better solution.


oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
openshift v3.11.16

