Bug 1970322 - [OVN] EgressFirewall does not work as expected
Summary: [OVN] EgressFirewall does not work as expected
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.7.z
Assignee: Alexander Constantinescu
QA Contact: huirwang
URL:
Whiteboard: UpdateRecommendationsBlocked
Depends On: 1970477
Blocks:
 
Reported: 2021-06-10 09:06 UTC by huirwang
Modified: 2023-10-11 06:20 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1970477 (view as bug list)
Environment:
Last Closed: 2021-06-15 09:28:51 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github openshift ovn-kubernetes pull 570 0 None open Bug 1970322: revert 487 2021-06-10 15:00:15 UTC
Red Hat Product Errata RHSA-2021:2286 0 None None None 2021-06-15 09:29:18 UTC

Description huirwang 2021-06-10 09:06:53 UTC
Description of problem:
1. With an Allow rule for a dnsName in an EgressFirewall policy, access is still denied; it appears that DNS resolution of the dnsName is being blocked.
2. The EgressFirewall also blocks pod-to-service access.


Version-Release number of selected component (if applicable):
4.7.0-0.nightly-2021-06-07-203428 

How reproducible:
Always

Steps to Reproduce:
1. Create one project named test
2. Create one pod hello-pod in test
3. Create EgressFirewall in ns test

oc get egressfirewall -n test -o yaml
.........
  spec:
    egress:
    - to:
        dnsName: www.test.com
      type: Allow
    - ports:
      - port: 80
        protocol: TCP
      to:
        dnsName: yahoo.com
      type: Allow
    - to:
        cidrSelector: 0.0.0.0/0
      type: Deny
  status:
    status: EgressFirewall Rules applied
.......
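
For reference, a complete manifest that would produce the spec excerpt above might look roughly like the following (a sketch: the apiVersion and metadata are assumptions, only the spec comes from the output; in OVN-Kubernetes the EgressFirewall object in a namespace is expected to be named default):

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
  namespace: test
spec:
  egress:
  - type: Allow
    to:
      dnsName: www.test.com
  - type: Allow
    ports:
    - port: 80
      protocol: TCP
    to:
      dnsName: yahoo.com
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0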

4. From hello-pod, try to access www.test.com -> Result#1
5. Create another svc/pod in the same namespace

$ oc get svc -n test
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
test-service   ClusterIP   172.30.68.11   <none>        27017/TCP   29m

$ oc get pods -n test -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP           NODE                             NOMINATED NODE   READINESS GATES
hello-pod       1/1     Running   0          48m   10.129.2.8   yinzhou-hui-ktgbt-rhel-0         <none>           <none>
test-rc-4gzjf   1/1     Running   0          30m   10.129.2.9   yinzhou-hui-ktgbt-rhel-0         <none>           <none>
test-rc-fkl8g   1/1     Running   0          30m   10.128.2.8   yinzhou-hui-ktgbt-worker-btlch   <none>           <none>
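
A Service definition consistent with the listing above would look roughly like this (a sketch: the selector label and targetPort are assumptions; only the name, namespace, and port 27017 come from the output):

apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: test
spec:
  selector:
    name: test-pods      # assumed label carried by the test-rc pods
  ports:
  - protocol: TCP
    port: 27017          # ClusterIP port shown in the listing
    targetPort: 8080     # assumed container port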

6. From hello-pod, try to access the svc --> Result#2

Actual results:
Result#1:
# curl -v www.test.com
* Rebuilt URL to: www.test.com/
* Could not resolve host: www.test.com
* Closing connection 0
curl: (6) Could not resolve host: www.test.com
Result#2:
 # curl -v 172.30.68.11:27017
* Rebuilt URL to: 172.30.68.11:27017/
*   Trying 172.30.68.11...
* TCP_NODELAY set

Expected results:
Both of the above accesses should succeed: www.test.com is covered by an Allow rule, and pod-to-service traffic inside the cluster should not be affected by the EgressFirewall.


Additional info:
After deleting the EgressFirewall, the above access worked.
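
The check amounts to deleting the EgressFirewall object and retrying from the pod, for example (assuming the object is named default, as OVN-Kubernetes requires, and that curl is available in hello-pod as shown above):

oc delete egressfirewall default -n test
oc exec -n test hello-pod -- curl -v www.test.com
oc exec -n test hello-pod -- curl -v 172.30.68.11:27017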

Comment 3 Scott Dodson 2021-06-10 14:24:03 UTC
We're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z. The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way. Sample answers are provided to give more context and the UpgradeBlocker flag has been added to this bug. It will be removed if the assessment indicates that this should not block upgrade edges. The expectation is that the assignee answers these questions.

Who is impacted?  If we have to block upgrade edges based on this issue, which edges would need blocking?
  example: Customers upgrading from 4.y.Z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
  example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time
What is the impact?  Is it serious enough to warrant blocking edges?
  example: Up to 2 minute disruption in edge routing
  example: Up to 90 seconds of API downtime
  example: etcd loses quorum and you have to restore from backup
How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
  example: Issue resolves itself after five minutes
  example: Admin uses oc to fix things
  example: Admin must SSH to hosts, restore from backups, or other non standard admin activities
Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
  example: No, it’s always been like this we just never noticed
  example: Yes, from 4.y.z to 4.y+1.z Or 4.y.z to 4.y.z+1

Comment 4 Alexander Constantinescu 2021-06-10 16:38:18 UTC
Answers to comment #3:

> Who is impacted?  If we have to block upgrade edges based on this issue, which edges would need blocking?
>  example: Customers upgrading from 4.y.Z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
>  example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time

Customers using egress firewall and upgrading to >= 4.7.14, irrespective of the platform used. 

> What is the impact?  Is it serious enough to warrant blocking edges?
>  example: Up to 2 minute disruption in edge routing
>  example: Up to 90 seconds of API downtime
>  example: etcd loses quorum and you have to restore from backup

Pods matching egress firewalls will lose connectivity to internal cluster services (they cannot connect to ClusterIP addresses).

> How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
>  example: Issue resolves itself after five minutes
>  example: Admin uses oc to fix things
>  example: Admin must SSH to hosts, restore from backups, or other non standard admin activities

No remediation, except deleting and not using the egress firewall.

> Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
>  example: No, it’s always been like this we just never noticed
>  example: Yes, from 4.y.z to 4.y+1.z Or 4.y.z to 4.y.z+1

Yes, it's a regression from 4.7.13.

Comment 5 W. Trevor King 2021-06-10 16:50:04 UTC
(In reply to Alexander Constantinescu from comment #4)
> Customers using egress firewall and upgrading to >= 4.7.14, irrespective of
> the platform used. 

No need to do anything about 4.7.14, since we tombstoned that one in the candidate channel too, for bug 1967614.  Tombstoning 4.7.15, which we've done for this bug, is enough to keep customers on supported releases away from this regression.  Adding UpdateRecommendationsBlocked since tombstoning is basically "this was blocker worthy".

[1]: https://github.com/openshift/enhancements/pull/475

Comment 11 errata-xmlrpc 2021-06-15 09:28:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.16 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2286

Comment 12 eric.henriksson 2023-10-11 06:20:06 UTC
Sorry to resurrect this bug, but what do I do if I currently have this issue on Azure Red Hat OpenShift versions 4.11 and 4.12?

My symptoms are the same. If I have an EgressFirewall with DNS entries, accessing those sites does not work as expected. 
If I remove the EgressFirewall or set a CIDR block of 0.0.0.0/0 with Allow, it works fine.
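
The broad allow I mean looks roughly like this (only a diagnostic workaround, since it effectively disables egress filtering for the namespace):

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 0.0.0.0/0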

I have tried dnsPolicy "ClusterFirst" and "Default" on my pods with no difference. Right now it seems that either the dnsName option doesn't work at all, or my pods and the OVN pods resolve the addresses differently.

The ARO clusters have not had any changes made to DNS.
We have an on-premises cluster on 4.12 that still uses SDN, and there EgressNetworkPolicy works with DNS names.

