Bug 1846647 - gcp-routes service too slow to not route traffic into GCP SDN
Summary: gcp-routes service too slow to not route traffic into GCP SDN
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Machine Config Operator
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 4.5.0
Assignee: Antonio Murdaca
QA Contact: Xingxing Xia
Depends On: 1845903
Reported: 2020-06-12 12:59 UTC by OpenShift BugZilla Robot
Modified: 2020-07-13 17:44 UTC (History)

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2020-07-13 17:43:59 UTC
Target Upstream Version:


System ID Priority Status Summary Last Updated
Github openshift machine-config-operator pull 1821 None closed [release-4.5] Bug 1846647: gcp-routes: decrease downfile poll to be faster than LB on recovery 2020-07-22 12:40:47 UTC
Red Hat Product Errata RHBA-2020:2409 None None None 2020-07-13 17:44:25 UTC

Description OpenShift BugZilla Robot 2020-06-12 12:59:06 UTC
+++ This bug was initially created as a clone of Bug #1845903 +++

When the local kube-apiserver becomes ready, the GCP LBs pick up the endpoint and route traffic to the IP. In parallel, the gcp-routes service notices that the local readyz is green and stops sending local traffic to that LB.

The second step must happen *BEFORE* the first. Otherwise, local requests still go to the GCP LB that has already picked up the endpoint, and we risk blackholing 1/3 of the requests (because GCP has no hairpinning support).

The reason is that the gcp-routes script polls every 5s (without inotify, which, as we know, isn't installed in the image), so we end up with 1*2s + 1.9999s + 5s before the gcp-routes script updates iptables (the ~2s because the readyz polling can land at an unfortunate time, and the 5s for the poll script to notice). Hence 9s >> the LB's 6s, so for roughly 3s we might lose 1/3 of the requests originating from the local host.

Compare: https://github.com/openshift/installer/pull/3512/files#diff-3aaac4ae7d381237a540f05371931b76R10
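The worst-case arithmetic above can be sketched as follows. The interval names are illustrative labels for the numbers quoted in the description, not variables from the actual script:

```shell
#!/usr/bin/env bash
# Illustrative worst-case timing for the OLD 5s poll interval.
# The readyz probe and the downfile poll are independent, so their
# worst-case delays add up: 1*2s + ~2s + 5s ~= 9s, vs. ~6s for the LB.
READYZ_PERIOD=2      # seconds between readyz probes
READYZ_PHASE=2       # probe just missed the transition (~1.9999s, rounded up)
DOWNFILE_POLL=5      # old gcp-routes poll interval
LB_REACTION=6        # GCP LB picks up the endpoint after ~6s

worst_case=$((READYZ_PERIOD + READYZ_PHASE + DOWNFILE_POLL))
echo "gcp-routes worst case: ${worst_case}s, LB reaction: ${LB_REACTION}s"
if (( worst_case > LB_REACTION )); then
    echo "window of blackholed local traffic: $((worst_case - LB_REACTION))s"
fi
```

Dropping the poll interval to 1s pulls the worst case below the LB's reaction time, which is exactly what the linked PR does.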

Comment 4 Xingxing Xia 2020-06-19 11:39:10 UTC
Checked in a 4.5.0-0.nightly-2020-06-18-210518 env (mine is IPI on GCP). On the masters, checking the file shows the sleep time in the down-status detection loop is now 1 second, matching the PR.
[root@xxia0619dr2-mj6qh-master-0 /]# vi /opt/libexec/openshift-gcp-routes.sh
sleep_or_watch() {
        for i in {0..5}; do
            for vip in "${!vips[@]}"; do
                if [[ "${vips[${vip}]}" != down ]] && [[ -e "${RUN_DIR}/${vip}.down" ]]; then
                    echo "new downfile detected"
                    break 2
                elif [[ "${vips[${vip}]}" = down ]] && ! [[ -e "${RUN_DIR}/${vip}.down" ]]; then
                    echo "downfile disappeared"
                    break 2
                fi
            done
            sleep 1 # keep this small enough to not make gcp-routes slower than LBs on recovery
        done
}

Checked https://thedataguy.in/where-gcp-internal-load-balancer-fails/ and understood that GCP routes a local request to the internal LB back to the same local node. From comment 0, the gcp-routes script must notice the down-status change quickly enough to "put an iptables rule for traffic redirection in place such that local clients do not send traffic" to the internal LB, as Stefan helped clarify in Slack. Based on all of this and the file content above, moving to VERIFIED.
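As additional context, the "traffic redirection" reaction to the downfile can be sketched roughly like this. This is a minimal sketch only: the VIP address, RUN_DIR, and the simplified single REDIRECT rule are assumptions for illustration; the real script is more involved.

```shell
#!/usr/bin/env bash
# Sketch: react to a per-VIP downfile by printing the iptables change
# that would be made. Paths, VIP, and the rule are illustrative.
RUN_DIR="/run/gcp-routes"
VIP="10.0.0.2"

if [[ -e "${RUN_DIR}/${VIP}.down" ]]; then
    # apiserver not ready: let local clients reach the LB (which will
    # have dropped this endpoint), so remove the local redirect
    action="remove"
    echo "iptables -t nat -D OUTPUT -d ${VIP} -j REDIRECT"
else
    # apiserver ready: short-circuit local traffic to the VIP, since a GCP
    # internal LB hairpins requests from a backend node back to that node
    action="add"
    echo "iptables -t nat -A OUTPUT -d ${VIP} -j REDIRECT"
fi
```

The key point verified above is that this check now runs every 1s instead of every 5s, so the redirect is installed before the LB starts sending traffic back to the node.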

Checked the service BTW as auxiliary info:
[root@xxia0619dr2-mj6qh-master-1 /]# systemctl list-unit-files | grep gcp-routes
gcp-routes.service                                                     enabled  
openshift-gcp-routes.service                                           enabled  
[root@xxia0619dr2-mj6qh-master-1 /]# systemctl status openshift-gcp-routes.service
● openshift-gcp-routes.service - Update GCP routes for forwarded IPs.
   Loaded: loaded (/etc/systemd/system/openshift-gcp-routes.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2020-06-19 02:37:06 UTC; 8h ago

Comment 5 errata-xmlrpc 2020-07-13 17:43:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

