Bug 1963160 - Slowness in services endpoint propagation after upgrading to 3.11.404
Summary: Slowness in services endpoint propagation after upgrading to 3.11.404
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 3.11.0
Hardware: Unspecified
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.11.z
Assignee: Antonio Ojea
QA Contact: zhaozhanqi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-05-21 15:27 UTC by Joel Rosental R.
Modified: 2024-12-20 20:06 UTC (History)
10 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: On clusters with a large number of pods and network policies, the SDN pod was busy processing network policies.
Consequence: The SDN pod blocked the informer handlers while processing network policies, so all other functionality, such as programming the Services iptables rules, was delayed.
Fix: The performance of the network policy processing code was improved by using the informer cache instead of building a new one.
Result: The SDN pod executes all of its functionality without additional delay, even in environments with a large number of pods and network policies.
Clone Of:
Environment:
Last Closed: 2021-07-07 11:01:35 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github openshift origin pull 26258 0 None closed Bug 1963160: Backport Bug 1896958: NetworkPolicy performance (pod caching) #226 2021-07-01 13:49:05 UTC
Red Hat Product Errata RHBA-2021:2639 0 None None None 2021-07-07 11:02:03 UTC

Description Joel Rosental R. 2021-05-21 15:27:54 UTC
Description of problem:
Upon upgrading OCP to version v3.11.404, service endpoint propagation started taking a very long time (in some cases around 2.5 hours).

Service idling is properly disabled and iptablesSyncPeriod is set to 2h.
Some nodes are still on the previous version (v3.11.286).

Version-Release number of selected component (if applicable):


How reproducible:
Always in the customer environment.

Steps to Reproduce:
1. Deploy an application pod and a service in front of it.
2. On nodes upgraded to 3.11.404, observe slowness in service endpoint propagation.

Actual results:
Slowness in endpoint propagation.

Expected results:
Endpoint propagation should complete within a short period of time.

Additional info:

Comment 2 Dan Winship 2021-05-25 15:17:38 UTC
(In reply to Joel Rosental R. from comment #0)
> Service idling is properly disabled and iptablesSyncPeriod is set to 2h.

As of 3.11.277 you should not be overriding iptablesSyncPeriod, and if you do, it can only hurt you (e.g., by making updates take longer to propagate). So:

  Step 1: Unset iptablesSyncPeriod. (Just remove it from the config; the default value
  when it's left unspecified is fine now.) Restart processes as needed to propagate the
  new config. (A sketch of the commands is at the end of this comment.)

  Step 2: Make sure that whatever document originally told you to set iptablesSyncPeriod
  has been updated to no longer recommend setting it, and file a bug report or send an
  email if not.

  Step 3: Check if the problem is fixed.

If not, please get openshift-sdn logs from some node, covering the time period from when a service was updated until when the update was fully propagated.
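
For reference, a rough sketch of step 1, assuming an RPM-based 3.11 node where the config lives at /etc/origin/node/node-config.yaml and is synced from the node-config-* ConfigMaps in the openshift-node namespace; adjust paths, ConfigMap names and the node service name to your installation:

  # Check whether the override is present on the node
  grep -n 'iptablesSyncPeriod' /etc/origin/node/node-config.yaml

  # In 3.11 the node config is usually synced from ConfigMaps, so remove the
  # iptablesSyncPeriod line there rather than editing the file directly
  oc get configmaps -n openshift-node
  oc edit configmap node-config-compute -n openshift-node

  # If the file is managed locally instead, drop the line and restart the node service
  sed -i.bak '/iptablesSyncPeriod/d' /etc/origin/node/node-config.yaml
  systemctl restart atomic-openshift-node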

Comment 9 Antonio Ojea 2021-05-31 11:23:17 UTC
There are several places with important information that will help us narrow down this problem (example commands for the first two items are sketched after this list):

- kube-proxy logs: klog.V(2).InfoS("syncProxyRules complete", "elapsed", time.Since(start))
- kube-proxy Prometheus metrics (I don't know if those are available in 3.11, though):
  "sync_proxy_rules_duration_seconds" and "network_programming_duration_seconds"
- endpoint annotation "endpoints.kubernetes.io/last-change-trigger-time"
- time to resolve the service dnsname
- time to connect to the clusterIP
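
As a rough sketch for the first two items, assuming kube-proxy runs inside the openshift-sdn pods in 3.11 and, if enabled, exposes its metrics on the default 127.0.0.1:10249 address; the exact log wording and verbosity level vary by version:

# Find the SDN pod on the node you are testing from and grep the sync-loop timings
oc -n openshift-sdn get pods -o wide | grep <node-name>
oc -n openshift-sdn logs <sdn-pod> | grep -i 'syncProxyRules'

# On the node itself, check the proxy metrics endpoint if it is exposed in this version
curl -s http://127.0.0.1:10249/metrics | grep -E 'sync_proxy_rules_duration_seconds|network_programming_duration_seconds'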

A simple way to test this is:

1. Create a single backend pod for the service

kubectl run --image=httpd test-pod --port=80

2. Wait until it is running

kubectl wait --for=condition=ready pod test-pod

3. Now comes the tricky part, because there are several dimensions:

We want to create the service and measure the time until we can access it, but we need to take into account the time to resolve DNS, the node we test from, the node where the backend pod is running, ...

I will describe the individual steps and leave it to the reader to automate them or to test the different combinations (a rough automation sketch is included at the end of this comment).

3.1 Create the service. This triggers the subsequent actions, so we note this timestamp for reference:

kubectl expose pod test-pod --port=80 --name=test-svc | awk '{ print strftime("[%Y-%m-%d %H:%M:%S]"), $0 }'
[2021-05-31 13:12:27] service/test-svc exposed

3.2 We can get the service IP directly from the API. Remember that the DNS name has a different propagation delay than the kube-proxy rules; those are handled by different components.

kubectl get service test-svc -o jsonpath='{ .spec.clusterIP }'
10.96.190.202

3.3 We can get the time when the endpoint was generated by checking the well-known annotation:

kubectl get endpoints test-svc -o jsonpath='{ .metadata.annotations.endpoints\.kubernetes\.io\/last-change-trigger-time }'
2021-05-31T11:12:27Z


3.4 We should take into account that the rules that NAT the service are applied on the originating node, so when we create a pod to test the service or we log into a node, we have to look at the kube-proxy logs on that same node. We should create the test pod in advance so we don't have to wait for it to become Running, e.g.:

kubectl run -it --image busybox busy -- sh
echo "GET /" | nc -v test-svc 80
test-svc (10.96.121.209:80) open
HTTP/1.1 400 Bad Request
Date: Mon, 31 May 2021 11:19:39 GMT
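
A rough way to automate the measurement left as an exercise above (a hypothetical sketch; the test-pod, test-svc and busy names match the commands in this comment, and busybox wget is used as the probe because its nc flags vary by build):

# Assumes test-pod and the busy pod are Running and test-svc does not exist yet
kubectl expose pod test-pod --port=80 --name=test-svc
CLUSTER_IP=$(kubectl get service test-svc -o jsonpath='{.spec.clusterIP}')
START=$(date +%s)
# Probe the clusterIP from the busy pod until the proxy rules are programmed on its node
until kubectl exec busy -- wget -q -T 1 -O /dev/null "http://$CLUSTER_IP/" 2>/dev/null; do
  sleep 1
done
echo "clusterIP reachable after $(( $(date +%s) - START ))s on the node running the busy pod"

The delta against the endpoints.kubernetes.io/last-change-trigger-time annotation from step 3.3 then gives the propagation delay, independently of DNS.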

Comment 57 zhaozhanqi 2021-07-02 08:19:28 UTC
(In reply to Antonio Ojea from comment #56)
> I've checked the uploaded pprof and I can't see any of the previous symptoms.
> This is not the same environment and conditions, but I think that we'll still
> see some contention.
> 
> I think that is ok to move forward, 
> thanks zzhao for checking

Thanks, Antonio, for helping confirm this. Moving this bug to verified.

Comment 59 errata-xmlrpc 2021-07-07 11:01:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 3.11.465 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:2639

