Bug 1904010 - Ingress controller incorrectly routes traffic to non-ready pods/backends.
Summary: Ingress controller incorrectly routes traffic to non-ready pods/backends.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.6
Hardware: All
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.6.z
Assignee: Andrew McDermott
QA Contact: Arvind iyengar
URL:
Whiteboard:
Duplicates: 1908290
Depends On: 1903206
Blocks:
 
Reported: 2020-12-03 11:27 UTC by OpenShift BugZilla Robot
Modified: 2022-10-20 05:49 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-12-14 13:51:28 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift router pull 230 0 None closed [release-4.6] Bug 1904010: Check Ready condition status in Endpointslices 2021-02-20 08:41:23 UTC
Github openshift router pull 232 0 None closed [release-4.6] Bug 1904010: Add unit tests to verify NotReadyAddresses in EndpointSlices 2021-02-20 08:41:23 UTC
Red Hat Product Errata RHSA-2020:5259 0 None None None 2020-12-14 13:51:41 UTC

Description OpenShift BugZilla Robot 2020-12-03 11:27:39 UTC
+++ This bug was initially created as a clone of Bug #1903206 +++

Description of problem:
We are using a StatefulSet or Deployment with two replicas, with a ClusterIP service, and a Route.  One of the Pods is alive and ready.  The other Pod is alive but not ready.  Before OCP 4.6, network traffic was correctly routed to only the ready Pod.  In OCP 4.6, it appears that either Pod can receive traffic through the Route.  Traffic from within the cluster which uses the ClusterIP service seems to be handled correctly, leading us to think this is an issue with the Router (i.e. HAProxy).

We have seen the incorrect behaviour on OCP 4.6.1, 4.6.3 and 4.6.4.  We have seen the correct behaviour on OCP 4.5.17.



Version-Release number of selected component (if applicable):
OCP 4.6.1, 4.6.3 and 4.6.4

How reproducible:


Steps to Reproduce:
## Re-create steps

Create the following deployment, service, and route (on 4.6 to recreate the problem, on 4.5 to show it works as expected):

### Deployment

Note that the readiness probe here will not pass.
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxdemoshello-deployment
  labels:
    app: nginxdemoshello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginxdemoshello
  template:
    metadata:
      labels:
        app: nginxdemoshello
    spec:
      containers:
      - name: liveness
        image: nginxdemos/nginx-hello:plain-text
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/ready
          initialDelaySeconds: 5
          periodSeconds: 5
```

### Service
```
apiVersion: v1
kind: Service
metadata:
  name: nginxdemoshello-service
spec:
  selector:
    app: nginxdemoshello
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
```

### Route
NOTE: Update the host with the cluster specific domain:
```
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: nginxdemoshello-route
spec:
  host: nginxdemoshello-drb-recreate.apps.<cluster specific domain>
  to:
    kind: Service
    name: nginxdemoshello-service
    weight: 100
  port:
    targetPort: 8080
  wildcardPolicy: None
```
You will now have two pods running but not in a ready state, one service serving them, and a route pointing at the service.
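Optionally, you can confirm at this point that the cluster itself sees no ready endpoints. The following is a suggested check (names assume the manifests above):

```
# Both pods should be Running but 0/1 ready
oc get pods -l app=nginxdemoshello

# The Endpoints object should show no ready addresses (ENDPOINTS is <none>)
oc get endpoints nginxdemoshello-service
```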


### Steps
Create a basic pod and exec into it:
```
oc run -it --rm --restart=Never ubi --image=ubi8/ubi sh
```
Within this pod run:

```
curl <pod1ip>:8080
curl <pod2ip>:8080
curl <service cluster ip>:8080
```

Note that, as expected, the two pods respond successfully (each includes its own host IP in the response) and the service correctly fails, as no pods are ready.

From outside OCP (i.e. your laptop terminal) run:

```
curl <route host>
```

In some runs, we have seen the issue recreated at this point: despite the fact that the two pods are not ready, and the service is correctly not directing traffic to either pod, the route may still direct traffic to a pod and you get back a response. If you have recreated the issue, you can note that the haproxy.config file in the router pods in the openshift-ingress namespace contains both pods in the application-level backend section, when it should be empty.
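To inspect that backend section, you can run something like the following against one of the router pods (the pod name is cluster specific; the same style of command is used in the verification later in this bug):

```
# Find a router pod
oc -n openshift-ingress get pods

# Dump the backend for the route; with no ready pods there should be no "server pod:..." lines
oc -n openshift-ingress exec <router pod> -- cat haproxy.config | grep -A15 -i "nginxdemoshello"
```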

If the route correctly failed, you will have got an 'Application is not available' page response, which is correct, as there are no ready pods.

Confirm the 2 pods are still not ready.

```
oc get pods -o wide
```
Now run the following (updating the pod name to match the first of your pods) to make one of the pods ready:

```
oc exec nginxdemoshello-deployment-<pod1specific> -- touch /tmp/ready
```
Confirm that the changed pod becomes ready shortly afterwards:

```
oc get pods -o wide
```
Repeat the curls against the two pods and the service (from the ubi pod), and the route (from your laptop).

On 4.6 you may now find that the wrong pod is being served by the route: it is the not-ready pod that responds. If so, you have recreated the issue. Optionally, you can view the haproxy.config file as above to see that it contains the not-ready pod.

If the correct (ready) pod responded, run the following commands (updating the pod names as specified, and noting that they run against different pods) to switch which pod is ready:

```
oc exec nginxdemoshello-deployment-<pod1specific> -- rm /tmp/ready
oc exec nginxdemoshello-deployment-<pod2specific> -- touch /tmp/ready
```
Repeat the curls. Hopefully now you have recreated the issue on 4.6.
On 4.5 the route will behave as expected in all cases.

Alternatively, if this still did not recreate the issue (both pods are in haproxy.config, so it is feasible that the correct pod always replies; I have also seen neither pod respond after step 1, though it is possible I did not give it long enough), you can confirm there is a problem by viewing the /var/lib/haproxy/conf/haproxy.config file on the pods in openshift-ingress. You will note that, despite the fact that one pod is not ready, it lists both pods under the application-level backends. On 4.5 you can note that only one will be shown.

Actual results:
Traffic is directed to both ready and non-ready pods

Expected results:
Traffic directed only to ready pods. 

Additional info:
OCP 4.6 on AWS and private cloud.

--- Additional comment from rcarrier on 2020-12-01 17:13:22 UTC ---

Hello Team,

Could you please prioritize this bugzilla? It has a high impact on the customer, and the case has high visibility to management on both sides.

Thanks in advance for your efforts and support.

Kind regards,
Roberto Carrieri
Escalation Manager
Customer Experience & Engagement
Mobile: +420.702.269.469

--- Additional comment from amcdermo on 2020-12-01 17:20:02 UTC ---

Will look into this immediately.

--- Additional comment from arthur.barr.com on 2020-12-02 11:41:22 UTC ---

Is there any update on this issue, please?  Have you managed to reproduce it?  Very happy to perform additional diagnostics, but hopefully you can re-create based on the above.

I'm a little concerned that the "Target Release" has been set to 4.7.0, as we really need to see a fix on OCP 4.6.x, as this appears to be regressed/changed behaviour.

--- Additional comment from amcdermo on 2020-12-02 11:45:20 UTC ---

I can reproduce this. This was broken by the switch to EndpointSlices (https://github.com/openshift/router/pull/154), which happened in 4.6.

Investigating a fix and will then backport to 4.6.
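For reference, the per-endpoint Ready condition that the router has to honour can be seen by dumping the EndpointSlice for the service from the reproduction steps (a sketch; names assume the manifests above):

```
# With the readiness probe failing, both entries should report ready=false,
# and the router should not add them to the backend
oc get endpointslices -l kubernetes.io/service-name=nginxdemoshello-service \
  -o jsonpath='{range .items[*].endpoints[*]}{.addresses[0]} ready={.conditions.ready}{"\n"}{end}'
```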

--- Additional comment from amcdermo on 2020-12-02 11:47:50 UTC ---

(In reply to Arthur Barr from comment #3)
> Is there any update on this issue, please?  Have you managed to reproduce
> it?  Very happy to perform additional diagnostics, but hopefully you can
> re-create based on the above.
> 
> I'm a little concerned that the "Target Release" has been set to 4.7.0, as
> we really need to see a fix on OCP 4.6.x, as this appears to be
> regressed/changed behaviour.

The procedure would mean that we first make the fix in 4.7 and then backport to 4.6.
I plan to have a fix up for review for 4.7 today.

--- Additional comment from arthur.barr.com on 2020-12-02 12:10:11 UTC ---

Thanks very much for the update.  Assuming this fix is accepted for 4.7, can you give any indication of a timeline for a fix on 4.6?  Any information would be appreciated.

--- Additional comment from amcdermo on 2020-12-02 12:29:34 UTC ---

(In reply to Arthur Barr from comment #6)
> Thanks very much for the update.  Assuming this fix is accepted for 4.7, can
> you give any indication of a timeline for a fix on 4.6?  Any information
> would be appreciated.

I just POSTed the PR: https://github.com/openshift/router/pull/229

If this gets reviewed and merged into 4.7 today then I can start the cherry-pick for 4.6.
Once picked for 4.6 that needs approval for a 4.6.z stream which may happen tomorrow. Failing
that it would be end of next week. Once it is merged into 4.7 I can give a better estimate.

Comment 1 Andrew McDermott 2020-12-04 17:07:43 UTC
Tagging with UpcomingSprint while investigation is either ongoing or
pending. Will be considered for earlier release versions when
diagnosed and resolved.

Comment 2 Andrew McDermott 2020-12-04 17:55:59 UTC
Moving this back to POST: https://bugzilla.redhat.com/show_bug.cgi?id=1903206#c9

Comment 3 Andrew McDermott 2020-12-04 17:57:38 UTC
Moving this back to POST: https://bugzilla.redhat.com/show_bug.cgi?id=1903206#c9
 (and this time actually moving the state).

Comment 6 Arvind iyengar 2020-12-08 11:36:43 UTC
The PR has been merged into the "4.6.0-0.nightly-2020-12-05-144004" release version. With this payload, it is noted that the fix effectively resolves the problem: when the pods are in a "Not ready" state, the haproxy configuration has an empty backend pool and a curl to the external route fails as expected. When one or all of the pods are available and in a ready state, the HAProxy backend pool gets populated with entries for the ready pods, and the external route traffic is sent to the ready pods only:

* With no pods in the "ready" state: 
-------
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2020-12-05-144004   True        False         14m     Cluster version is 4.6.0-0.nightly-2020-12-05-144004

$ oc create -f nginx-demoshell-deployment.yaml 
deployment.apps/nginxdemoshello-deployment created

$ oc create -f nginx-demoshell-service.yaml 
service/nginxdemoshello-service created

$ oc create -f nginx-demoshell-route.yaml 
route.route.openshift.io/nginxdemoshello-route created

$ oc get pods
NAME                                              READY   STATUS    RESTARTS   AGE
pod/nginxdemoshello-deployment-5b46f96478-557fh   0/1     Running   0          83s
pod/nginxdemoshello-deployment-5b46f96478-kdzcr   0/1     Running   0          83s


$ oc -n openshift-ingress exec router-default-6557c6f85f-9gbxx -- cat haproxy.config | grep -A15 -i "nginxdemoshello"
backend be_http:test1:nginxdemoshello-route
  mode http
  option redispatch
  option forwardfor
  balance leastconn

  timeout check 5000ms
  http-request add-header X-Forwarded-Host %[req.hdr(host)]
  http-request add-header X-Forwarded-Port %[dst_port]
  http-request add-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request add-header X-Forwarded-Proto https if { ssl_fc }
  http-request add-header X-Forwarded-Proto-Version h2 if { ssl_fc_alpn -i h2 }
  http-request add-header Forwarded for=%[src];host=%[req.hdr(host)];proto=%[req.hdr(X-Forwarded-Proto)]
  cookie 1384d216b7b1811db4625b94ff95ea56 insert indirect nocache httponly

$ curl nginxdemoshello-drb-test1.apps.aiyengar-oc46-1904010-patched.qe.devcluster.openshift.com -I
HTTP/1.0 503 Service Unavailable
Pragma: no-cache
Cache-Control: private, max-age=0, no-cache, no-store
Connection: close
Content-Type: text/html
-------

* With one pod in "ready" state:
------
$ oc exec nginxdemoshello-deployment-5b46f96478-557fh  -- touch /tmp/ready

$ oc get pods -o wide
NAME                                          READY   STATUS    RESTARTS   AGE     IP            NODE                                         NOMINATED NODE   READINESS GATES
nginxdemoshello-deployment-5b46f96478-557fh   1/1     Running   0          5m42s   10.129.2.15   ip-10-0-179-34.us-east-2.compute.internal    <none>           <none>
nginxdemoshello-deployment-5b46f96478-kdzcr   0/1     Running   0          5m42s   10.128.2.8    ip-10-0-154-227.us-east-2.compute.internal   <none>           <none>

$ oc -n openshift-ingress exec router-default-6557c6f85f-9gbxx -- cat haproxy.config | grep -A15 -i "nginxdemoshello"
backend be_http:test1:nginxdemoshello-route
  mode http
  option redispatch
  option forwardfor
  balance leastconn

  timeout check 5000ms
  http-request add-header X-Forwarded-Host %[req.hdr(host)]
  http-request add-header X-Forwarded-Port %[dst_port]
  http-request add-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request add-header X-Forwarded-Proto https if { ssl_fc }
  http-request add-header X-Forwarded-Proto-Version h2 if { ssl_fc_alpn -i h2 }
  http-request add-header Forwarded for=%[src];host=%[req.hdr(host)];proto=%[req.hdr(X-Forwarded-Proto)]
  cookie 1384d216b7b1811db4625b94ff95ea56 insert indirect nocache httponly
  server pod:nginxdemoshello-deployment-5b46f96478-557fh:nginxdemoshello-service::10.129.2.15:8080 10.129.2.15:8080 cookie e619f7d64d6550afe74f2856a2fb035a weight 256


$ curl nginxdemoshello-drb-test1.apps.aiyengar-oc46-1904010-patched.qe.devcluster.openshift.com -I                   
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Tue, 08 Dec 2020 11:30:54 GMT
Content-Type: text/plain
Content-Length: 175
Expires: Tue, 08 Dec 2020 11:30:53 GMT
Cache-Control: no-cache
Set-Cookie: 1384d216b7b1811db4625b94ff95ea56=e619f7d64d6550afe74f2856a2fb035a; path=/; HttpOnly
------

Comment 8 errata-xmlrpc 2020-12-14 13:51:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.6.8 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5259

Comment 9 Andrew McDermott 2020-12-17 17:41:42 UTC
*** Bug 1908290 has been marked as a duplicate of this bug. ***

