Description of problem:

Today in ovn-k8s we create a load balancer for all services across all switches. In OpenShift, some services create endpoints on every single node (like CoreDNS). As we scale a cluster out to several hundred nodes, every DNS query a pod makes could potentially hit a pod endpoint on any node. This is quite inefficient and creates a lot of east<->west traffic when a DNS endpoint resides on every node.

When ovn-controller renders a load balancer, it creates a flow whose action points to an OpenFlow group. This group contains every possible endpoint, and one is chosen via packet hash. ovn-controller is also aware of the ports attached to the OVS instance it manages. When ovn-controller creates the OpenFlow group entries, it could check which endpoints are local to its switch and give those endpoints a higher weight. This would ensure that those endpoints are chosen more often for pods on the node that access the load balancer. By making it more probable for load balancer traffic to resolve local to the node, we can greatly reduce the amount of service east<->west traffic, as sketched below.
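A rough sketch of what the weighted group could look like at the OpenFlow layer. This is illustrative only: the bridge name, group id, table number, endpoint addresses, and weight values are made up, and the real flows ovn-controller emits also carry zone/register metadata on the ct() action. The point is just that the bucket for the node-local endpoint gets a higher weight than the remote ones:

# Local endpoint (10.244.0.5) weighted 100, remote endpoints weighted 10,
# so the select group's hash lands on the local backend most of the time.
ovs-ofctl -O OpenFlow15 add-group br-int 'group_id=1,type=select,selection_method=dp_hash,bucket=weight:100,actions=ct(commit,table=20,nat(dst=10.244.0.5:53)),bucket=weight:10,actions=ct(commit,table=20,nat(dst=10.244.1.5:53)),bucket=weight:10,actions=ct(commit,table=20,nat(dst=10.244.2.5:53))'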
I think it is better to try to make this change compatible with the Kubernetes Service Topology feature, where the load balancer can choose between different endpoints depending on whether they are local or in the same cloud zone.
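For reference, a hedged example of the (alpha, feature-gated) Service Topology API this could align with; the service name and selector here are illustrative. Endpoints on the same node are preferred first, then the same zone, then anywhere:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: example-dns
spec:
  selector:
    k8s-app: example-dns
  ports:
  - port: 53
    protocol: UDP
  topologyKeys:
  - "kubernetes.io/hostname"        # prefer an endpoint on the same node
  - "topology.kubernetes.io/zone"   # then the same zone
  - "*"                             # then anywhere
EOF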
For the Kubernetes "Local" traffic policy, we can simply add a single local endpoint per GR load balancer (since GR load balancers are per node). But to satisfy the Local traffic policy requirement that traffic must not be SNAT'ed, we also need: https://bugzilla.redhat.com/show_bug.cgi?id=1927540
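A minimal sketch of that per-node arrangement with ovn-nbctl, assuming a hypothetical gateway router GR_node1, VIP 10.96.0.10:53, and node-local backend 10.244.0.5:53; each node's GR load balancer carries only the endpoint that lives on that node:

# Create a load balancer whose only backend is this node's endpoint,
# then attach it to this node's gateway router.
ovn-nbctl lb-add lb-dns-local 10.96.0.10:53 10.244.0.5:53 udp
ovn-nbctl lr-lb-add GR_node1 lb-dns-local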