Bug 1421465 - Restrict DHCP agent communication to the subnet that it serves in an automated manner
Summary: Restrict DHCP agent communication to the subnet that it serves in an automated manner
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-neutron
Version: 8.0 (Liberty)
Hardware: All
OS: All
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Assaf Muller
QA Contact: Toni Freger
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-02-12 13:25 UTC by bigswitch
Modified: 2023-09-14 03:53 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-03-18 11:36:01 UTC
Target Upstream Version:
Embargoed:



Description bigswitch 2017-02-12 13:25:53 UTC
Description of problem:
DHCP agent communication needs to be restricted to the subnet to which the agent provides DHCP service. We would like this to be done automatically when a network is created.

Version-Release number of selected component (if applicable):


How reproducible:
Create multiple networks and connect them to a router. The DHCP agent of one network is then reachable from the other networks (a rough command sketch follows the steps below).

Steps to Reproduce:
1.
2.
3.
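
A rough sketch of one way to observe this (all names, IDs, and addresses below are placeholders; the two subnets are assumed to already exist with DHCP enabled on each):

$ neutron router-create router-test

$ neutron router-interface-add router-test <subnet-A>

$ neutron router-interface-add router-test <subnet-B>

# From a VM on network B, the DHCP port of network A is now reachable, e.g.:
$ ping <network-A_DHCP_port_IP>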

Actual results:


Expected results:


Additional info:
One solution would be to automatically write the necessary iptables rules on the controller nodes on which the DHCP agents are brought up.
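
As a very rough illustration of the kind of rule meant here (this is a sketch, not something Neutron does today; the namespace, port, and CIDR are placeholders, and it assumes the agent runs in the usual qdhcp- namespace on the controller):

$ ip netns exec qdhcp-<network_UUID> iptables -A INPUT -p udp --dport 67 -j ACCEPT   # keep DHCP itself working (requests arrive from 0.0.0.0)

$ ip netns exec qdhcp-<network_UUID> iptables -A INPUT ! -s <subnet_CIDR> -j DROP    # drop anything routed in from other subnets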

Comment 1 Assaf Muller 2017-02-12 15:22:33 UTC
Can you please describe the concern? What sort of attacks does the current situation open up?

Comment 2 Dan Sneddon 2017-02-13 17:49:52 UTC
(In reply to bigswitch from comment #0)
> Additional info:
> One solution would be to automatically write the necessary IP table rules on
> the controller nodes on which the DHCP agents are brought up.

I'm trying to figure out how this would work. DHCP requests come in to the server with a source IP address of 0.0.0.0 and a destination address of the global broadcast 255.255.255.255. Because of this, the only iptables rules that can be applied to DHCP requests are ones based on MAC addresses. However, the DHCP agent won't respond to unknown MAC addresses, so it shouldn't be responding to nodes that don't have a port associated with a subnet on which DHCP is enabled. Also, VMs shouldn't be able to use MAC spoofing to obtain a DHCP address intended for another node, because the Neutron port will not forward traffic from MAC addresses other than the one assigned to the node.
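
For illustration only, MAC-based rules of the kind described above might look like the following (the namespace and MAC are placeholders; this is a sketch, not an existing Neutron feature, and as noted above it would largely duplicate what the agent already does):

$ ip netns exec qdhcp-<network_UUID> iptables -A INPUT -p udp --dport 67 \
  -m mac --mac-source <known_client_MAC> -j ACCEPT   # one rule per known port MAC

$ ip netns exec qdhcp-<network_UUID> iptables -A INPUT -p udp --dport 67 -j DROP   # drop requests from any other MAC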

So let's say you have three networks: A, B, and C, and you only want DHCP services for nodes on network A.

$ neutron net-create --provider:network_type <vlan|vxlan|etc.> --shared network-A

$ neutron net-create --provider:network_type <vlan|vxlan|etc.> --shared network-B

$ neutron net-create --provider:network_type <vlan|vxlan|etc.> --shared network-C

$ neutron subnet-create --name subnet-A --enable-dhcp=True --gateway <IP> \
  --allocation-pool start=<start_ip>,end=<end_ip> network-A <network/cidr>

$ neutron subnet-create --name subnet-B --enable-dhcp=False --gateway <IP2> network-B <network/cidr2>

$ neutron subnet-create --name subnet-C --enable-dhcp=False --gateway <IP3> network-C <network/cidr3>

$ neutron router-create router-ABC

$ neutron router-interface-add router-ABC subnet-A

$ neutron router-interface-add router-ABC subnet-B

$ neutron router-interface-add router-ABC subnet-C

(If you want to specify the router address, create a port with the desired IP and pass port=<port_ID> rather than subnet in the above statements.)
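
For example (placeholder IDs and IP, only to illustrate the port= form):

$ neutron port-create network-A \
  --fixed-ip subnet_id=<subnet-A_UUID>,ip_address=<desired_router_IP>

$ neutron router-interface-add router-ABC port=<port_UUID_from_above>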

Now you can boot nodes via DHCP simply by booting them on network A:

$ nova boot --flavor m1.tiny --image fedora --nic net-id=<network-A_UUID> <instance_name>

Or you can boot nodes with static IPs by creating a port on subnet B or C (where DHCP is disabled) and booting with that port:

$ neutron port-create network-B \
  --fixed-ip subnet_id=subnet-B,ip_address=<fixed_IP>

$ nova boot --nic port-id=<UUID_from_port-create> --flavor m1.tiny --image fedora <instance_name>

Of course, in the latter scenario, you will be responsible for setting the IP address inside the VM, for example by passing a --user-data script to the node as documented here:

https://ask.openstack.org/en/question/30690/add-multiple-specific-ips-to-instance/
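
For instance, a minimal --user-data script along those lines might look like this (the device name, address, prefix, and gateway are placeholders, and the right approach depends on the guest image):

$ cat > static-ip.sh <<'EOF'
#!/bin/bash
# Assign the fixed IP that was reserved on the port, since DHCP is disabled on subnet-B
ip addr add <fixed_IP>/<prefix_length> dev eth0
ip link set eth0 up
ip route add default via <IP2>
EOF

$ nova boot --nic port-id=<UUID_from_port-create> --flavor m1.tiny \
  --image fedora --user-data static-ip.sh <instance_name>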

Comment 4 bigswitch 2017-02-17 12:55:50 UTC
(In reply to Assaf Muller from comment #1)
> Can you please describe the concern? What sort of attacks does the current
> situation open up?

@Assaf Muller: There's no security concern. However, when L3 networking is provided by a third-party vendor, there are sometimes reasons to prevent segments from talking to each other, and in such cases the number of policies needed to enforce that grows. Today, the policy that keeps DHCP clients on different subnets from reaching each other (IPs communicating across subnets) has to be implemented on the top-of-rack switches. If there were a single knob to restrict DHCP endpoint communication to within its own network, and that rule could be implemented on the controller nodes, it would help with scalability.

Comment 5 Assaf Muller 2017-04-28 21:25:28 UTC
Please see comment 2.

Comment 6 Nir Yechiel 2018-03-18 11:36:01 UTC
It seems like there is no consensus about this one, and we have no requests for such a change, other than this one. 

@Big Switch folks, please re-open if you have any new info to share, or if you are willing to participate upstream to better clarify your use cases and propose patches.

Comment 7 Red Hat Bugzilla 2023-09-14 03:53:32 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

