Description of problem:
DHCP agent communication needs to be restricted to the subnet to which it provides DHCP service. We would like to do this in an automated manner when a network is created.

Version-Release number of selected component (if applicable):

How reproducible:
Create multiple networks and connect them to a router. The DHCP agent of one network is accessible from another.

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
One solution would be to automatically write the necessary iptables rules on the controller nodes on which the DHCP agents are brought up.
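For illustration only, the "Additional info" suggestion could look roughly like the sketch below, assuming the DHCP agent runs inside a per-network `qdhcp-` namespace on the controller. `<network_id>` and `<subnet_cidr>` are placeholders, not values from this bug, and these rules are an untested sketch of the idea, not a proposed implementation:

```shell
# Hypothetical sketch: confine the DHCP agent's namespace so it accepts
# DHCP requests (UDP 67) and traffic from its own subnet, and drops the rest.
ip netns exec qdhcp-<network_id> iptables -A INPUT -p udp --dport 67 -j ACCEPT
ip netns exec qdhcp-<network_id> iptables -A INPUT -s <subnet_cidr> -j ACCEPT
ip netns exec qdhcp-<network_id> iptables -A INPUT -j DROP
```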
Can you please describe the concern? What sort of attacks does the current situation open up?
(In reply to bigswitch from comment #0)
> Additional info:
> One solution would be to automatically write the necessary IP table rules on
> the controller nodes on which the DHCP agents are brought up.

I'm trying to figure out how this would work. DHCP requests come in to the server with a source IP address of 0.0.0.0 and a destination address of the global broadcast 255.255.255.255. Because of this, the only iptables rules that can be applied to DHCP requests are ones based on MAC addresses. However, the DHCP agent won't respond to unknown MAC addresses, so it shouldn't be responding to nodes that don't have a port associated on a subnet with DHCP enabled. Also, VMs shouldn't be able to use MAC spoofing to obtain a DHCP address from another node, because the Neutron port will not forward traffic from MAC addresses other than the one assigned to the node.

So let's say you have three networks, A, B, and C, and you only want DHCP services for nodes on network A:

$ neutron net-create --provider:network_type <vlan|vxlan|etc.> --shared network-A
$ neutron net-create --provider:network_type <vlan|vxlan|etc.> --shared network-B
$ neutron net-create --provider:network_type <vlan|vxlan|etc.> --shared network-C
$ neutron subnet-create --name subnet-A --enable-dhcp=True --gateway <IP> \
    --allocation-pool start=<start_ip>,end=<end_ip> network-A <network/cidr>
$ neutron subnet-create --name subnet-B --enable-dhcp=False --gateway <IP2> network-B <network/cidr2>
$ neutron subnet-create --name subnet-C --enable-dhcp=False --gateway <IP3> network-C <network/cidr3>
$ neutron router-create router-ABC
$ neutron router-interface-add router-ABC subnet-A
$ neutron router-interface-add router-ABC subnet-B
$ neutron router-interface-add router-ABC subnet-C

(If you want to specify the router address, create a port with the desired IP and pass port=<port_ID> rather than subnet in the above statements.)
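To make the MAC-address point concrete: since the IP header of a DHCP request carries 0.0.0.0 -> 255.255.255.255, an iptables filter for such requests can only match on link-layer fields. A hedged sketch (the MAC address is a placeholder in the OpenStack `fa:16:3e` prefix, not taken from this bug):

```shell
# Hypothetical sketch: accept DHCP requests only from a known port's MAC,
# drop all other DHCP requests. Usable in INPUT/FORWARD/PREROUTING chains,
# where the iptables "mac" match extension is valid.
iptables -A INPUT -p udp --sport 68 --dport 67 \
    -m mac --mac-source fa:16:3e:00:00:01 -j ACCEPT
iptables -A INPUT -p udp --sport 68 --dport 67 -j DROP
```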
Now you can boot nodes via DHCP by simply booting a node on network A:

$ nova boot --flavor m1.tiny --image fedora --nic net-id=<Foo_UUID>

Or you can boot nodes with a static IP by creating a port on subnets B or C, where DHCP is disabled, and booting with that port:

$ neutron port-create network-B \
    --fixed-ip subnet_id=subnet-B,ip_address=<fixed_IP>
$ nova boot --nic port-id=<UUID_from_port-create> --flavor m1.tiny --image fedora <instance_name>

Of course, in the latter scenario you will be responsible for setting the IP address on the VM, perhaps by passing a --user-data script to the node as documented here:
https://ask.openstack.org/en/question/30690/add-multiple-specific-ips-to-instance/
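For the static-IP case, such a --user-data script might look like this sketch for a Fedora-style guest using legacy network scripts. The interface name and the RFC 5737 example addresses are placeholders; adapt both to your subnet:

```shell
#!/bin/bash
# Hypothetical user-data sketch: configure a static address on eth0 at boot.
# Replace IPADDR/NETMASK/GATEWAY with values from the port you created.
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<'EOF'
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.0.2.10
NETMASK=255.255.255.0
GATEWAY=192.0.2.1
EOF
systemctl restart network
```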
(In reply to Assaf Muller from comment #1)
> Can you please describe the concern? What sort of attacks does the current
> situation open up?

@Assaf Muller: There's no security concern. However, when L3 networking is provided by third-party vendors, there are sometimes reasons to prevent segments from talking to each other, and in such cases the number of policies needed to enforce that grows. The policy restricting DHCP clients from reaching each other (IPs communicating across subnets) has to be implemented on the top-of-rack switches. If we had a single knob to restrict DHCP endpoint communication to within its own network, and that rule could be implemented on the controller nodes, it would help with scalability.
Please see comment 2.
It seems like there is no consensus about this one, and we have no requests for such a change, other than this one. @Big Switch folks, please re-open if you have any new info to share, or if you are willing to participate upstream to better clarify your use cases and propose patches.
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days