Description of problem:
1. For an allowed dnsName in an EgressFirewall policy, access is still denied; it looks like the policy blocks dnsName resolution.
2. It also blocks pod-to-service access.

Version-Release number of selected component (if applicable):
4.7.0-0.nightly-2021-06-07-203428

How reproducible:
Always

Steps to Reproduce:
1. Create one project named test.
2. Create one pod hello-pod in test.
3. Create an EgressFirewall in ns test:

oc get egressfirewall -n test -o yaml
.........
  spec:
    egress:
    - to:
        dnsName: www.test.com
      type: Allow
    - ports:
      - port: 80
        protocol: TCP
      to:
        dnsName: yahoo.com
      type: Allow
    - to:
        cidrSelector: 0.0.0.0/0
      type: Deny
  status:
    status: EgressFirewall Rules applied
.......

4. From hello-pod, try to access www.test.com -> Result#1
5. Create another svc/pod in the same namespace:

$ oc get svc -n test
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
test-service   ClusterIP   172.30.68.11   <none>        27017/TCP   29m

$ oc get pods -n test -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP           NODE                             NOMINATED NODE   READINESS GATES
hello-pod       1/1     Running   0          48m   10.129.2.8   yinzhou-hui-ktgbt-rhel-0         <none>           <none>
test-rc-4gzjf   1/1     Running   0          30m   10.129.2.9   yinzhou-hui-ktgbt-rhel-0         <none>           <none>
test-rc-fkl8g   1/1     Running   0          30m   10.128.2.8   yinzhou-hui-ktgbt-worker-btlch   <none>           <none>

6. From hello-pod, try to access the svc -> Result#2

Actual results:

Result#1:
# curl -v www.test.com
* Rebuilt URL to: www.test.com/
* Could not resolve host: www.test.com
* Closing connection 0
curl: (6) Could not resolve host: www.test.com

Result#2:
# curl -v 172.30.68.11:27017
* Rebuilt URL to: 172.30.68.11:27017/
*   Trying 172.30.68.11...
* TCP_NODELAY set

Expected results:
The above access should be allowed.

Additional info:
After deleting the egressfirewall, the above access worked.
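For reference, a minimal reproduction sketch of the steps above. The client image, the use of oc run, and the heredoc manifest layout are assumptions for illustration (any image with curl and any ClusterIP service in the namespace should behave the same); the EgressFirewall spec mirrors the one shown in step 3.

# Assumed reproduction sketch -- image and exec commands are placeholders,
# not the exact objects used in the original report.
oc new-project test

# Test client pod (any image that provides curl works; this image is an assumption).
oc run hello-pod -n test --image=registry.fedoraproject.org/fedora-toolbox --command -- sleep infinity

# EgressFirewall matching the spec shown in step 3 (OVN-Kubernetes, name must be "default").
cat <<EOF | oc apply -f -
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
  namespace: test
spec:
  egress:
  - type: Allow
    to:
      dnsName: www.test.com
  - type: Allow
    ports:
    - port: 80
      protocol: TCP
    to:
      dnsName: yahoo.com
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0
EOF

# Repeat the checks from steps 4 and 6. The ClusterIP below is the one from
# the report; it will differ on another cluster.
oc exec -n test hello-pod -- curl -v www.test.com
oc exec -n test hello-pod -- curl -v 172.30.68.11:27017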
We're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z. The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way. Sample answers are provided to give more context and the UpgradeBlocker flag has been added to this bug. It will be removed if the assessment indicates that this should not block upgrade edges. The expectation is that the assignee answers these questions.

Who is impacted? If we have to block upgrade edges based on this issue, which edges would need blocking?
  example: Customers upgrading from 4.y.Z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
  example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time

What is the impact? Is it serious enough to warrant blocking edges?
  example: Up to 2 minute disruption in edge routing
  example: Up to 90 seconds of API downtime
  example: etcd loses quorum and you have to restore from backup

How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
  example: Issue resolves itself after five minutes
  example: Admin uses oc to fix things
  example: Admin must SSH to hosts, restore from backups, or other non standard admin activities

Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
  example: No, it's always been like this we just never noticed
  example: Yes, from 4.y.z to 4.y+1.z or 4.y.z to 4.y.z+1
Answers to comment #3:

> Who is impacted? If we have to block upgrade edges based on this issue, which edges would need blocking?
> example: Customers upgrading from 4.y.Z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
> example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time

Customers using egress firewall and upgrading to >= 4.7.14, irrespective of the platform used.

> What is the impact? Is it serious enough to warrant blocking edges?
> example: Up to 2 minute disruption in edge routing
> example: Up to 90 seconds of API downtime
> example: etcd loses quorum and you have to restore from backup

Pods matching egress firewalls will lose connectivity to internal cluster services (they can't connect to a ClusterIP).

> How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
> example: Issue resolves itself after five minutes
> example: Admin uses oc to fix things
> example: Admin must SSH to hosts, restore from backups, or other non standard admin activities

No remediation, other than deleting the egress firewall and not using it.

> Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
> example: No, it's always been like this we just never noticed
> example: Yes, from 4.y.z to 4.y+1.z or 4.y.z to 4.y.z+1

Yes, it's a regression from 4.7.13.
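As a rough way to gauge exposure before upgrading, and to apply the only mitigation mentioned above (removing the egress firewall), something like the following should work; the namespace used here is the one from the report and is otherwise an assumption.

# List EgressFirewall objects cluster-wide to see which namespaces would be affected.
oc get egressfirewall --all-namespaces

# Mitigation described above: delete the egress firewall in an affected namespace.
# With OVN-Kubernetes the object is conventionally named "default"; adjust as needed.
oc delete egressfirewall default -n test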
(In reply to Alexander Constantinescu from comment #4)
> Customers using egress firewall and upgrading to >= 4.7.14, irrespective of
> the platform used.

No need to do anything about 4.7.14, since we tombstoned that one in the candidate channel too for bug 1967614. Tombstoning 4.7.15, which we've done for this bug, is enough to keep customers on supported releases away from this regression. Adding UpdateRecommendationsBlocked, since tombstoning is basically "this was blocker worthy" [1].

[1]: https://github.com/openshift/enhancements/pull/475
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.16 security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2286
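A quick way to confirm a cluster has picked up the fixed release referenced in this advisory (a sketch, assuming cluster-admin access with oc):

# Print the current cluster version; 4.7.16 or later contains the fix per this advisory.
oc get clusterversion version -o jsonpath='{.status.desired.version}{"\n"}'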