Bug 1762300

Summary: [4.1] backport kube-proxy fix for spurious connection resets
Product: OpenShift Container Platform
Component: Networking
Sub component: openshift-sdn
Version: 3.11.0
Target Milestone: ---
Target Release: 4.1.z
Hardware: Unspecified
OS: Unspecified
Status: CLOSED WONTFIX
Severity: unspecified
Priority: unspecified
Reporter: Dan Winship <danw>
Assignee: Dan Winship <danw>
QA Contact: zhaozhanqi <zzhao>
CC: zzhao
Doc Type: If docs needed, set a value
Story Points: ---
Clone Of: 1762298
Bug Depends On: 1762298
Last Closed: 2020-01-20 18:16:06 UTC

Description Dan Winship 2019-10-16 13:07:21 UTC
+++ This bug was initially created as a clone of Bug #1762298 +++

https://github.com/kubernetes/kubernetes/pull/74840 fixed a problem in kube-proxy where, if certain TCP ACKs were lost (e.g. because heavy traffic was causing packets to be dropped), the connection could be spuriously reset when the unexpected retransmit arrived.
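
For anyone not familiar with the upstream change: as I understand it, the patch has kube-proxy program an iptables rule that drops packets conntrack marks INVALID (such as an out-of-window retransmit after a lost ACK), so the kernel never answers them with a RST and tears down the connection. The Go sketch below only illustrates that rule; the chain name and the direct iptables invocation are assumptions for demonstration, since the real proxier writes its rules through iptables-restore.

// Minimal sketch, not the actual kube-proxy code: it just installs the
// kind of rule the upstream fix adds, dropping packets that conntrack
// considers INVALID so the host never replies to them with a RST.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Roughly equivalent to:
	//   iptables -t filter -A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
	args := []string{
		"-t", "filter",
		"-A", "KUBE-FORWARD", // assumed chain; exact placement may differ
		"-m", "conntrack",
		"--ctstate", "INVALID",
		"-j", "DROP",
	}
	out, err := exec.Command("iptables", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("iptables failed: %v: %s\n", err, out)
		return
	}
	fmt.Println("added conntrack-INVALID DROP rule")
}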

There's not an easy way to test this, but it's a simple patch: it appears to have fixed the problem for the original filers, it went into kube 1.15, and it hasn't caused problems for anyone else. We now have a customer who would like it backported.