Bug 1009318

Summary: Make the iptables rule to port 8775 customizable
Product: Red Hat OpenStack
Component: openstack-nova
Version: 3.0
Reporter: Sadique Puthen <sputhenp>
Assignee: Dan Smith <dasmith>
QA Contact: Ami Jeain <ajeain>
CC: dallan, dasmith, hateya, jakub.chrzeszczyk, ndipanov, rrivera, yeylon
Status: CLOSED NOTABUG
Severity: high
Priority: high
Hardware: Unspecified
OS: Unspecified
Type: Bug
Doc Type: Bug Fix
Last Closed: 2013-10-11 14:34:00 UTC

Description Sadique Puthen 2013-09-18 08:06:35 UTC
Description of problem:

Currently, port 8775, where the nova-api server listens, is opened to all sources by iptables. This rule is hard-coded:

def metadata_accept():
    """Create the filter accept rule for metadata."""
    rule = '-s 0.0.0.0/0 -p tcp -m tcp --dport %s' % CONF.metadata_port
    if CONF.metadata_host != '127.0.0.1':
        rule += ' -d %s -j ACCEPT' % CONF.metadata_host
    else:
        rule += ' -m addrtype --dst-type LOCAL -j ACCEPT'
    iptables_manager.ipv4['filter'].add_rule('INPUT', rule)
    iptables_manager.apply()
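
When metadata_host is 127.0.0.1 and metadata_port is the usual 8775, the rule string built above works out to:

-s 0.0.0.0/0 -p tcp -m tcp --dport 8775 -m addrtype --dst-type LOCAL -j ACCEPT

With any other metadata_host the rule is the same except it ends in '-d <metadata_host> -j ACCEPT'. Either way, the source match is always 0.0.0.0/0.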

This makes it impossible to restrict the rule to only the required sources in order to comply with security regulations. If the rule is customized via /etc/sysconfig/iptables, it is overwritten by the hard-coded rule the next time the nova-api server is restarted.

The patch below reads the source for the iptables rule from nova.conf:

# diff -up /usr/lib/python2.6/site-packages/nova/network/linux_net.py.bak /usr/lib/python2.6/site-packages/nova/network/linux_net.py
--- /usr/lib/python2.6/site-packages/nova/network/linux_net.py.bak	2013-09-18 11:46:11.703272565 +1000
+++ /usr/lib/python2.6/site-packages/nova/network/linux_net.py	2013-09-18 12:01:20.574545748 +1000
@@ -614,10 +614,11 @@ def metadata_forward():
 def metadata_accept():
     """Create the filter accept rule for metadata."""
     iptables_manager.ipv4['filter'].add_rule('INPUT',
-                                             '-s 0.0.0.0/0 -d %s '
+                                             '-s %s -d %s '
                                              '-p tcp -m tcp --dport %s '
                                              '-j ACCEPT' %
-                                             (CONF.metadata_host,
+                                             (CONF.fixed_range,
+					      CONF.metadata_host,
                                               CONF.metadata_port))
     iptables_manager.apply()

This patch basically makes sure that the metadata service is only accessible to the VMs (which should only call it from the fixed range). It uses a config value from /etc/nova/nova.conf for that purpose.
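
For reference, the value the patched rule reads comes from the existing fixed_range option in /etc/nova/nova.conf, along these lines (the addresses here are placeholders, not a recommendation):

[DEFAULT]
fixed_range=192.168.32.0/22
metadata_host=192.168.32.1
metadata_port=8775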

Thanks to Jakub Chrzeszczyk for providing the patch.

Comment 1 Dan Smith 2013-09-18 16:32:44 UTC
I'm concerned about trying to expose a mechanism to make that rule more restrictive than it is, because I don't think it will be easy to express the full set of potential situations without reimplementing iptables' syntax itself. The boolean situation here may be okay for many situations, but again, I think this should be handled off-box, or by adding a way to put a "provider" chain at the top of INPUT to allow site customization.

However, I think that the fact that the metadata service will respond to anyone that makes it through the existing rule is actually not a problem, as you might expect. The code looks up metadata by source address, which we should be able to trust in a nova-network scenario given the pedantic nature of the rules we add when starting a guest. So, as I understand it, another host may connect to the metadata service, but it shouldn't be able to get anything from it. IMHO, that eliminates the concern, unless I'm missing something.

The above two comments would be my feedback upstream if a patch were proposed to do this.

Comment 2 Jakub Chrzeszczyk 2013-09-19 01:59:11 UTC
(In reply to Dan Smith from comment #1)

> I'm concerned about trying to expose a mechanism to make that rule more
> restrictive than it is, because I don't think it will be easy to express the
> full set of potential situations without reimplementing iptables' syntax
> itself. The boolean situation here may be okay for many situations, but
> again, I think this should be handled off-box, or by adding a way to put a
> "provider" chain at the top of INPUT to allow site customization.

Thank you for your feedback on this, Dan. I have a limited understanding of OpenStack internals - while I tested my patch and it works well in our environment, I wasn't sure whether it was good enough to be widely used.

I would like to make a couple of comments on your suggestions: 

1) off-box firewall: I see where you're coming from, and ideally I'd like to have a border firewall in front of the compute nodes. Unfortunately our network design doesn't provide that and, as we are part of a much bigger organization that controls that design, changing it would be a long and difficult process.

Another argument is: if local firewall functionality is provided, it should be available for the administrator to use, and the software shouldn't be allowed to compromise it in any way.

One use case for this is an environment like ours, with no hardware border firewalls, only local ones.

Another use case is a high-security environment that requires both border and local firewalls: the local firewall provides an extra layer of security, protecting the server from compromised nodes inside the organization, and also acts as a countermeasure in case of security vulnerabilities at the border firewall level.

Looking at these use cases, I believe there is a need to improve the nova code so that it doesn't force port 8775 to be world-open. I believe this would be of value to a wider community of OpenStack users, not just us.

2) I'm very interested to learn more about the "provider" chain idea; I believe it would meet our requirements without raising any of the potential concerns you brought up. We don't want to push for any particular way of implementing this - all we want is to be sure we have full control over the iptables rules, so we can make our firewalls secure and compliant with our policies.

> However, I think that the fact that the metadata service will respond to
> anyone that makes it through the existing rule is actually not a problem, as
> you might expect. The code looks up metadata by source address, which we
> should be able to trust in a nova-network scenario given the pedantic nature
> of the rules we add when starting a guest. So, as I understand it, another
> host may connect to the metadata service, but it shouldn't be able to get
> anything from it. IMHO, that eliminates the concern, unless I'm missing
> something.

Thank you for clearing this up. It's good to know that nova ensures the metadata service is only usable by the VMs; I agree that this makes the world-open port less of an issue. Despite that, I would still like an option to control access to the metadata service at the firewall level as well. Application-level security is important and essential, but in my opinion it can't be a substitute for network-level access control.

I believe this has a good justification from a security policy point of view - many organizations rely on external port scans to ensure security. NCI is among them. If a service is world-accessible, it's impossible to tell from the outside whether it has security implemented at the application layer or not. Hence I believe it's reasonable to have a policy that only allows services to be world-accessible if they require it to function properly.

Please let me know if these points sound reasonable - we're more than happy to discuss further.

Thanks for looking into this for us.

Best Regards,
Jakub

Comment 3 Dan Smith 2013-09-19 03:35:27 UTC
Just for clarity, my "provider chain" idea was just to have nova-api insert something like this into its INPUT chain before the rest of the normal rules:

  iptables -N provider-rules
  iptables -I INPUT -j provider-rules

which would let you put your own policy into the provider-rules chain, such as:

  iptables -A provider-rules -s ! 192.168.1.0/24 -p tcp --dport 8775 -j DROP

However, that chain would get wiped each time nova-api started, requiring something else to keep it populated. This is the nature of how nova-network manages the iptables rules: it does the equivalent of an iptables-restore, which blows away any current rules.

The problem with using CONF.fixed_range is that we can have many networks, and we'd have to make nova-api track them to keep the rules updated. For example:

+--------------------------------------+--------------+---------+--------------------------+
| ID                                   | Name         | Status  | Networks                 |
+--------------------------------------+--------------+---------+--------------------------+
| c6c20acd-a374-46dc-8541-879e2cadf4a2 | foo          | ACTIVE  | foo=192.168.250.2        |
| 94959c36-71e1-4512-ac0c-482c5dfde593 | test-grizzly | ACTIVE  | novanetwork=192.168.32.5 |
+--------------------------------------+--------------+---------+--------------------------+

The naive rule using CONF.fixed_range would allow the novanetwork guest to access the server, but not the foo guest.

The provider chain "hack" would probably be something we could work upstream because it's minor (although it really doesn't help much). Changing how nova-network manages the iptables rules as a one-shot action, or making nova-api track the creation of new networks and keep the rules updated, is, IMHO, too large a new feature to propose, given that nova-network is soon to be deprecated.

Thinking about this more, we do have a hook mechanism that we could leverage here. We could work hook points upstream that would allow either:

1. A custom out-of-tree python module that uses iptables_manager to add more rules "natively"
2. A custom out-of-tree python module that calls a script like /etc/nova/provider-rules.sh or something

See this: https://github.com/openstack/nova/blob/master/doc/source/devref/hooks.rst
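
To make option 1 a bit more concrete, here is a rough sketch of what such a module might look like, assuming the pre/post hook interface described in that document and assuming a hook point were exposed around the rule setup; the class name, the entry point it would be registered under, and the rule itself are all hypothetical placeholders:

from nova.network import linux_net


class ProviderRulesHook(object):
    """Hypothetical hook: re-adds site-specific filter rules after
    nova has rebuilt its own."""

    def pre(self, *args, **kwargs):
        pass

    def post(self, rv, *args, **kwargs):
        # Example site policy: drop metadata traffic coming from
        # outside one subnet. The address is just a placeholder.
        manager = linux_net.iptables_manager
        manager.ipv4['filter'].add_rule(
            'INPUT',
            '! -s 192.168.1.0/24 -p tcp -m tcp --dport 8775 -j DROP')
        manager.apply()

Such a module would be registered via the entry point mechanism the hooks doc describes, so the site policy lives outside the nova tree instead of as a local patch.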

Working the hook points upstream would be, IMHO, quite doable. Thoughts?

Comment 5 Russell Bryant 2013-10-09 17:40:42 UTC
I spoke to Dan Smith about this a bit, and he brought up that the vnet+ approach won't work for other libvirt backends that work with OpenStack, such as Xen or LXC.

I think the hook approach is really the best we can do here.  It would give you complete control over adding whatever is necessary for your environment.  I think anything else is going to be problematic for some type of deployment, as we've seen here after going through some other options.

Comment 6 Jakub Chrzeszczyk 2013-10-10 05:59:31 UTC
Thank you, Russell. Good point about Xen/LXC, I hadn't thought of that.

Hooks it is then.

Is this functionality something that is/will shortly be available in Grizzly, or is this more likely Havana?

Comment 7 Russell Bryant 2013-10-10 14:52:27 UTC
The hooks API itself exists in grizzly.  What's missing is adding a hook in the right place.  So, we need to find the ideal place to hook in and then submit a patch upstream for it.  From there we can look at backporting it for our packages.

Comment 8 Russell Bryant 2013-10-10 20:08:06 UTC
I did some investigation today into adding a hook as we discussed, and I believe there may be a solution for this already included in Grizzly.  Take a look at these two options:

# Regular expression to match iptables rule that should always
# be on the top. (string value)
#iptables_top_regex=

# Regular expression to match iptables rule that should always
# be on the bottom. (string value)
#iptables_bottom_regex=

These options are intended to let you match rules that you want nova to preserve. When nova updates the iptables rules it does a save/modify/restore; these regexes are applied during the modify step, to rules that you want preserved before or after the rules that nova adds.

So, it seems like this should allow configuring a node with custom rules before nova-network runs, and then nova-network should preserve them.
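
For example (an untested sketch - the comment tag, regex, and addresses are placeholders), you could tag the custom rule so the regex has something unambiguous to match:

# in /etc/nova/nova.conf
iptables_top_regex=site-policy

# rule seeded before nova-network starts, e.g. via /etc/sysconfig/iptables
-A INPUT ! -s 192.168.1.0/24 -p tcp -m tcp --dport 8775 -m comment --comment "site-policy" -j DROP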

Can you give this a try and let us know if it is a suitable solution for you?

Comment 10 Russell Bryant 2013-10-11 14:34:00 UTC
Thanks for the detailed update!