Bug 2072134 - Routes are not accessible within cluster from hostnet pods
Summary: Routes are not accessible within cluster from hostnet pods
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.10
Hardware: All
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.11.0
Assignee: Surya Seetharaman
QA Contact: Mike Fiedler
URL:
Whiteboard: ovn-perfscale
Depends On:
Blocks: 2073411
 
Reported: 2022-04-05 16:43 UTC by Murali Krishnasamy
Modified: 2022-08-10 11:03 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-08-10 11:03:38 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
Github openshift/ovn-kubernetes pull 1040 (Merged): Bug 2072134: [DownstreamMerge] 4-18-22 (last updated 2022-04-22 16:47:01 UTC)
Github ovn-org/ovn-kubernetes pull 2918 (Merged): Fix ETP=local for host->svc traffic (last updated 2022-04-29 12:21:23 UTC)

Description Murali Krishnasamy 2022-04-05 16:43:02 UTC
Description of problem:
When running a router-perf test on a 4.10 nightly GCP cluster, Ingress routes fail to respond to curl from within the cluster; this fails the test consistently on OVN-K clusters. The same script works in an OpenShiftSDN environment.

The routes are fine when accessed externally but fail within the cluster, and ONLY from hostnet pods.
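For illustration, a rough sketch of the failure mode (the hostnet pod name is a placeholder; the route host is the one from the outputs below):

# curl from outside the cluster succeeds:
# curl -s -o /dev/null -w '%{http_code}\n' http://http-perf-99-http-scale-http.apps.perf-410-485d.sidq.s2.devshift.org
200

# the same curl from a hostnet pod inside the cluster is refused:
# oc rsh <hostnet-pod> curl -s http://http-perf-99-http-scale-http.apps.perf-410-485d.sidq.s2.devshift.org
curl: (7) Failed to connect ... Connection refused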

Version-Release number of selected component (if applicable):
4.10.0-0.nightly-2022-03-29-163038

How reproducible:
Always on GCP

Steps to Reproduce:
1. Deploy a healthy 4.10 cluster with bare-minimum worker nodes (8 CPU, 32G memory) on the GCP or Azure platform, using the OVNKubernetes network type
2. Create an nginx pod, a NodePort service, and a route to reach the pod, or refer to these definition templates - https://github.com/cloud-bulldozer/e2e-benchmarking/tree/master/workloads/router-perf-v2/templates
3. Try to curl the route from any hostnet pod within the cluster (see the command sketch below)
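A minimal command sketch of steps 2 and 3 (image and resource names are illustrative, not the exact templates from the repo above):

# oc create deployment nginx --image=docker.io/nginxinc/nginx-unprivileged
# oc expose deployment nginx --type=NodePort --port=8080
# oc expose svc nginx
# oc rsh <hostnet-pod> curl http://$(oc get route nginx -o jsonpath='{.spec.host}')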

Actual results:
The curl gets "connection refused" when tried within the cluster, while the same routes work externally.

Expected results:
HTTP should return a 200 OK response within the cluster, as on any other platform

Additional info:
# oc get ep -n http-scale-http http-perf-99 
NAME           ENDPOINTS          AGE
http-perf-99   10.128.4.15:8080   3d1h

# oc describe svc -n http-scale-http http-perf-99 
Name:                     http-perf-99
Namespace:                http-scale-http
Labels:                   app=http-perf
                          kube-burner-index=1
                          kube-burner-job=http-scale-http
                          kube-burner-uuid=ec86ed90-9eb6-4db1-9639-a671a6385ad4
Annotations:              <none>
Selector:                 app=nginx-99
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       172.30.175.114
IPs:                      172.30.175.114
Port:                     http  8080/TCP
TargetPort:               8080/TCP
NodePort:                 http  30246/TCP
Endpoints:                10.128.4.15:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

# oc describe route -n http-scale-http http-perf-99 
Name:			http-perf-99
Namespace:		http-scale-http
Created:		3 days ago
Labels:			kube-burner-index=2
			kube-burner-job=http-scale-http
			kube-burner-uuid=ec86ed90-9eb6-4db1-9639-a671a6385ad4
Annotations:		openshift.io/host.generated=true
Requested Host:		http-perf-99-http-scale-http.apps.perf-410-485d.sidq.s2.devshift.org
			   exposed on router default (host router-default.apps.perf-410-485d.sidq.s2.devshift.org) 3 days ago
Path:			<none>
TLS Termination:	<none>
Insecure Policy:	<none>
Endpoint Port:		http

Service:	http-perf-99
Weight:		100 (100%)
Endpoints:	10.128.4.15:8080

TCPdump/packet capture showed no trace when the curl was issued; no packets were coming out of the hostnet pod
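One way such a capture could be taken (node name and capture filter are assumptions; on OVN-K it may also be worth watching br-ex):

# oc debug node/<node-running-hostnet-pod> -- chroot /host tcpdump -i any -nn port 80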

Comment 2 Murali Krishnasamy 2022-04-05 23:32:10 UTC
This issue appears on an Azure 4.10 (4.10.0-0.nightly-2022-04-05-063640) cluster as well.

Comment 3 W. Trevor King 2022-04-07 05:18:04 UTC
We're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z. The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way. Sample answers are provided to give more context and the ImpactStatementRequested label has been added to this bug. When responding, please remove ImpactStatementRequested and set the ImpactStatementProposed label. The expectation is that the assignee answers these questions.

Who is impacted? If we have to block upgrade edges based on this issue, which edges would need blocking?
* example: 4.10 OVN clusters on GCP and Azure (but not SDN clusters, or AWS clusters).  Updates from 4.9 to 4.10 risk regressing for those clusters, until this bug lands a 4.10 fix.

What is the impact? Is it serious enough to warrant blocking edges?
* example: pods with 'spec.hostNetwork: true' that try to connect to cluster Routes but which have no router pod on the same node will fail with "Connection refused".

How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
* example: Issue resolves itself after five minutes
* example: Admin uses oc to fix things, e.g. by getting a router pod on the workload node, or moving the workload pod to a node with a router pod.
* example: Admin must SSH to hosts, restore from backups, or perform other non-standard admin activities

Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
* example: Yes, regression introduced before 4.10 GAed via bug 2025467.  4.9 is not affected.

Comment 4 Surya Seetharaman 2022-04-07 17:43:32 UTC
Hey Trevor,

Thanks for formatting it in a simple fashion. I've never done this before, so I'll try my best to answer these questions, though it looks like you've already got most of them correct. So thanks a lot! :)

(In reply to W. Trevor King from comment #3)
> We're asking the following questions to evaluate whether or not this bug
> warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z. The
> ultimate goal is to avoid delivering an update which introduces new risk or
> reduces cluster functionality in any way. Sample answers are provided to
> give more context and the ImpactStatementRequested label has been added to
> this bug. When responding, please remove ImpactStatementRequested and set
> the ImpactStatementProposed label. The expectation is that the assignee
> answers these questions.
> 
> Who is impacted? If we have to block upgrade edges based on this issue,
> which edges would need blocking?

4.10 OVN-K clusters on GCP and Azure (but not SDN clusters, or AWS
clusters).  Updates from 4.9 to 4.10 risk regressing for those clusters,
until this bug lands a 4.10 fix.

> What is the impact? Is it serious enough to warrant blocking edges?

pods with 'spec.hostNetwork: true' that try to connect to cluster
Routes but which have no router pod on the same node will fail with
"Connection refused".

 
> How involved is remediation (even moderately serious impacts might be
> acceptable if they are easy to mitigate)?

Admin uses oc to fix things, e.g. by getting a router pod on the
workload node, or moving the workload pod to a node with a router pod.
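A hedged sketch of those mitigations (the replica count is illustrative):

# scale the default IngressController so router pods land on more nodes:
# oc patch ingresscontroller/default -n openshift-ingress-operator --type=merge -p '{"spec":{"replicas":3}}'
# or find the nodes that already run a router and place the workload there:
# oc get pods -n openshift-ingress -o wide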


> Is this a regression (if all previous versions were also vulnerable,
> updating to the new, vulnerable version does not increase exposure)?

Yes, regression introduced before 4.10 GAed via bug 2025467.  4.9 is not affected.

Comment 5 W. Trevor King 2022-04-07 22:06:31 UTC
We expect most on-cluster containers to access services via their Service URI instead of trying to reach the service via the Route URI.  OVN on GCP or Azure doesn't seem to be a common configuration.  Comment 4's co-location mitigation gives folks an out if they do hit this issue, and folks can also reconfigure workloads to use the Service URI [1,2].  So for now, we expect to continue recommending 4.9 to 4.10 updates, although we may revisit this decision if new information comes in.

[1]: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#namespaces-of-services
[2]: https://docs.openshift.com/container-platform/4.10/nodes/pods/nodes-pods-secrets.html#nodes-pods-secrets-certificates-creating_nodes-pods-secrets
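Using the service from this report as an example, the in-cluster Service URI form would be:

# curl http://http-perf-99.http-scale-http.svc.cluster.local:8080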

Comment 9 Mike Fiedler 2022-04-22 16:53:28 UTC
Verified on 4.11.0-0.nightly-2022-04-22-002610

1. create hostnetwork pod
2. create hello-openshift pod/svc and expose route
3. oc rsh to hostnetwork pod
4. curl the route successfully (see the sketch below)
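A minimal sketch of that flow (image names are assumptions, and the hostNetwork pod again assumes an appropriate SCC):

# oc run hostnet --image=registry.access.redhat.com/ubi8/ubi --restart=Never --overrides='{"spec":{"hostNetwork":true}}' -- sleep 3600
# oc create deployment hello --image=docker.io/openshift/hello-openshift
# oc expose deployment hello --port=8080
# oc expose svc hello
# oc rsh hostnet curl -s http://$(oc get route hello -o jsonpath='{.spec.host}')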

Comment 11 errata-xmlrpc 2022-08-10 11:03:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5069

