Bug 1309319 - Default security group allows all IPv6 traffic to tenant instances
Summary: Default security group allows all IPv6 traffic to tenant instances
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: async
Target Release: 7.0 (Kilo)
Assignee: Dan Sneddon
QA Contact: Marius Cornea
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-02-17 13:12 UTC by Marius Cornea
Modified: 2016-04-26 21:54 UTC (History)
13 users

Fixed In Version: openstack-tripleo-heat-templates-0.8.6-122.el7ost
Doc Type: Bug Fix
Doc Text:
This update resolves an issue where all IPv6 network traffic to tenant instances would be allowed under certain circumstances. This was caused by the Neutron Open vSwitch agent incorrectly deactivating IPv6 security group rules whenever the kernel's default setting (net.ipv6.conf.default.disable_ipv6) reported IPv6 as disabled, on the assumption that IPv6 was disabled everywhere on the host.
Clone Of:
Environment:
Last Closed: 2016-03-09 20:01:37 UTC
Target Upstream Version:
Embargoed:




Links
System: Red Hat Product Errata
ID: RHBA-2016:0424
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Enterprise Linux OSP 7 director Bug Fix Advisory
Last Updated: 2016-03-10 00:20:23 UTC

Internal Links: 1694849

Description Marius Cornea 2016-02-17 13:12:37 UTC
Description of problem:

Deployment with SSL+IPv6, with 3 controllers, 1 compute node, 3 Ceph nodes, and VLAN provider networking. The default security group appears to allow all IPv6 traffic for instances connected to a dual-stack VLAN network.

Version-Release number of selected component (if applicable):
openstack-tripleo-heat-templates-0.8.6-121.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Deploy overcloud
export THT=/home/stack/templates/my-overcloud 
openstack overcloud deploy --templates $THT \
-e $THT/environments/network-isolation-v6-storagev4.yaml \
-e $THT/environments/net-single-nic-with-vlans-v6.yaml \
-e /home/stack/templates/network-environment-v6.yaml \
-e ~/templates/enable-tls.yaml \
-e ~/templates/inject-trust-anchor.yaml \
-e ~/templates/ceph.yaml \
-e ~/templates/firstboot-environment.yaml \
--control-scale 3 \
--compute-scale 1 \
--ceph-storage-scale 3 \
--neutron-disable-tunneling \
--neutron-network-type vlan \
--neutron-network-vlan-ranges datacentre:1000:1100 \
--libvirt-type qemu \
--ntp-server clock.redhat.com \
--timeout 180

2. neutron net-create provider-1000 --shared  --provider:physical_network datacentre --provider:network_type vlan --provider:segmentation_id 1000
neutron subnet-create provider-1000 10.0.0.0/24 --name provider-1000-subnet-ipv4 --gateway 10.0.0.1
neutron subnet-create provider-1000 2001:db1::/64 --name provider-1000-subnet-ipv6 --gateway 2001:db1::1 --ipv6-address-mode slaac --ip-version 6

The router advertisement (RA) source for vlan1000 is on the undercloud (radvd configuration):
interface vlan1000 { 
        AdvSendAdvert on;
        MinRtrAdvInterval 3; 
        MaxRtrAdvInterval 10;
        prefix 2001:db1::/64  { 
                AdvOnLink on; 
                AdvAutonomous on; 
                AdvRouterAddr on; 
        };
};
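To verify the advertisements from the undercloud, one option (assuming radvd is the daemon serving the configuration above; radvdump ships with the radvd package) is:

# Restart radvd after configuration changes, then dump the RAs seen on the wire
systemctl restart radvd
radvdump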

3. Spawn an instance connected to provider-1000

Actual results:
vm01 | ACTIVE | -          | Running     | provider-1000=2001:db1::f816:3eff:fe4a:346b, 10.0.0.6

ssh fedora@10.0.0.6 -> times out
ssh fedora@2001:db1::f816:3eff:fe4a:346b 'cat /etc/hostname'
vm01.localdomain

Expected results:
I'd expect the default security group to block all incoming traffic for both IPv4 and IPv6.
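For reference, the rules of the default group can be listed to confirm that no blanket IPv6 ingress rule exists; a minimal check with the neutron CLI of that release:

neutron security-group-show default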

Comment 3 Angus Thomas 2016-02-17 16:53:19 UTC
Operators can work around this by assigning a security group when they're setting up their provider networks after the deployment has completed. 

Maybe we need doctext in 7.3 to tell people to do that. 

I don't think this needs to block 7.3, though, because it is easily remedied by Operators.
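For illustration only, a hypothetical sequence an operator could use (the group name provider-restricted and the SSH-only rule are examples, not a recommendation; vm01 is the instance from the report above):

neutron security-group-create provider-restricted
neutron security-group-rule-create --direction ingress --ethertype IPv6 --protocol tcp --port-range-min 22 --port-range-max 22 provider-restricted
nova add-secgroup vm01 provider-restricted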

Comment 7 Assaf Muller 2016-02-17 19:38:02 UTC
I did some initial testing. I wanted to see if this reproduces on master. I used a devstack VM, and performed the following test:

1) Create two Neutron networks, each with an IPv4 and an IPv6 subnet (slaac/slaac, so the router is a Neutron router and not a physical router; this differs from what Marius did but should not affect the results of the test).
2) Create two security groups, each with the rule from the default security group that allows all incoming traffic from the same security group.
3) Create an instance on the first network with the first security group, and a second instance on the second network with the second security group.
4) From one instance, ping the other via IPv4. This is blocked successfully, unless I add a rule that allows incoming traffic from the other security group (see the sketch after this list).
5) From one instance, ping the other via IPv6. Same result as IPv4: blocked successfully, unless I explicitly allow incoming traffic from the other security group.
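A minimal sketch of the allow rule mentioned in step 4, assuming two security groups named sg1 and sg2 (names hypothetical; the neutron CLI accepts a group name or ID here):

# Allow all ingress on sg1 from members of sg2, for each address family
neutron security-group-rule-create --direction ingress --ethertype IPv4 --remote-group-id sg2 sg1
neutron security-group-rule-create --direction ingress --ethertype IPv6 --remote-group-id sg2 sg1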

I conclude that this doesn't reproduce on master. It could be an issue that was fixed between OSP 7 and master, a difference between OSP-d and devstack, or a difference between the scenario Marius described and mine.

@Marius, can I have credentials to an environment that reproduces the issue?

Comment 11 Assaf Muller 2016-02-17 20:07:13 UTC
Shifting to Neutron.

Comment 12 Assaf Muller 2016-02-17 20:32:47 UTC
Back to Director...

I noticed that 'ip6tables -L' is completely empty. Not good.

The Neutron OVS agent applies IPv6 (ip6tables) rules only if IPv6 is enabled on the machine, which it checks via:
cat /proc/sys/net/ipv6/conf/default/disable_ipv6

And on the compute node in Marius' setup that value was '1', which means that IPv6 is disabled.

I'm not sure what is setting that value or why, but it's not Neutron. It's the deployment tool's responsibility to decide on Neutron's behalf if IPv6 should be enabled or disabled.
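A quick way to confirm this state on a compute node (commands as in this comment; the '1' output is the value observed in this environment):

# Kernel default for new interfaces; '1' means IPv6 disabled, so the agent skips IPv6 rules
cat /proc/sys/net/ipv6/conf/default/disable_ipv6
1
# Consequently no IPv6 filtering is programmed at all
ip6tables -L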

Comment 14 Sridhar Gaddam 2016-02-18 07:24:57 UTC
Agree with @Assaf's analysis. Neutron looks at the flag "net.ipv6.conf.default.disable_ipv6" and uses this information both for security groups and for IPv6 forwarding inside the namespace.
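For completeness, a minimal sketch of a manual remediation on an affected node, assuming IPv6 is meant to be enabled there (the unit name neutron-openvswitch-agent is assumed for OSP 7 on RHEL 7, and the sysctl.d file name is an example; the shipped fix in openstack-tripleo-heat-templates-0.8.6-122.el7ost addresses this at deployment time instead):

# Re-enable IPv6 in the kernel defaults and persist the setting
sysctl -w net.ipv6.conf.default.disable_ipv6=0
echo 'net.ipv6.conf.default.disable_ipv6 = 0' > /etc/sysctl.d/99-enable-ipv6.conf
# Restart the OVS agent so it re-reads the flag and programs ip6tables rules
systemctl restart neutron-openvswitch-agent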

Comment 19 errata-xmlrpc 2016-03-09 20:01:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0424.html

