Bug 1309319 - Default security group allows all IPv6 traffic to tenant instances
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: async
Target Release: 7.0 (Kilo)
Assigned To: Dan Sneddon
QA Contact: Marius Cornea
Keywords: Triaged
Depends On:
Reported: 2016-02-17 08:12 EST by Marius Cornea
Modified: 2016-04-26 17:54 EDT (History)
CC: 13 users

See Also:
Fixed In Version: openstack-tripleo-heat-templates-0.8.6-122.el7ost
Doc Type: Bug Fix
Doc Text:
This update resolves an issue where all IPv6 network traffic to tenant instances would be allowed under certain circumstances. The Neutron Open vSwitch agent checks the kernel's default disable_ipv6 flag and, when that flag indicates IPv6 is disabled, skips programming IPv6 security group rules; on nodes where the flag was set even though instances were using IPv6, this left IPv6 traffic unfiltered.
Story Points: ---
Clone Of:
Last Closed: 2016-03-09 15:01:37 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Marius Cornea 2016-02-17 08:12:37 EST
Description of problem:

Deployment with SSL+IPv6, using 3 controllers, 1 compute, 3 Ceph nodes, and VLAN provider networking. The default security group appears to allow all IPv6 traffic for instances connected to a dual-stack VLAN network.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Deploy overcloud
export THT=/home/stack/templates/my-overcloud 
openstack overcloud deploy --templates $THT \
-e $THT/environments/network-isolation-v6-storagev4.yaml \
-e $THT/environments/net-single-nic-with-vlans-v6.yaml \
-e /home/stack/templates/network-environment-v6.yaml \
-e ~/templates/enable-tls.yaml \
-e ~/templates/inject-trust-anchor.yaml \
-e ~/templates/ceph.yaml \
-e ~/templates/firstboot-environment.yaml \
--control-scale 3 \
--compute-scale 1 \
--ceph-storage-scale 3 \
--neutron-disable-tunneling \
--neutron-network-type vlan \
--neutron-network-vlan-ranges datacentre:1000:1100 \
--libvirt-type qemu \
--ntp-server clock.redhat.com \
--timeout 180

2. neutron net-create provider-1000 --shared  --provider:physical_network datacentre --provider:network_type vlan --provider:segmentation_id 1000
neutron subnet-create provider-1000 --name provider-1000-subnet-ipv4 --gateway
neutron subnet-create provider-1000 2001:db1::/64 --name provider-1000-subnet-ipv6 --gateway 2001:db1::1 --ipv6-address-mode slaac --ip-version 6

The RA source for vlan1000 is on the undercloud:
interface vlan1000 {
        AdvSendAdvert on;
        MinRtrAdvInterval 3;
        MaxRtrAdvInterval 10;
        prefix 2001:db1::/64 {
                AdvOnLink on;
                AdvAutonomous on;
                AdvRouterAddr on;
        };
};

3. Spawn an instance connected to provider-1000

Actual results:
vm01 | ACTIVE | -          | Running     | provider-1000=2001:db1::f816:3eff:fe4a:346b,

ssh fedora@ -> times out
ssh fedora@2001:db1::f816:3eff:fe4a:346b 'cat /etc/hostname' -> succeeds

Expected results:
I'd expect the default security group to block all incoming traffic for both IPv4 and IPv6.
Comment 3 Angus Thomas 2016-02-17 11:53:19 EST
Operators can work around this by assigning a security group when they're setting up their provider networks after the deployment has completed. 

Maybe we need doctext in 7.3 to tell people to do that. 

I don't think this needs to block 7.3, though, because it is easily remedied by Operators.
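A hedged sketch of that workaround follows; the security group name, the rule (SSH-only over IPv6), and the port ID are all illustrative, and the exact rules an operator wants will vary:

```shell
# Illustrative workaround: replace the default security group on an instance
# port with one that only admits SSH over IPv6. All names/IDs are hypothetical.
apply_restricted_sg() {
    port_id="$1"
    neutron security-group-create restricted-sg
    neutron security-group-rule-create restricted-sg \
        --direction ingress --ethertype IPv6 \
        --protocol tcp --port-range-min 22 --port-range-max 22
    neutron port-update "$port_id" --security-group restricted-sg
}
# usage: apply_restricted_sg <port-id>
```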
Comment 7 Assaf Muller 2016-02-17 14:38:02 EST
I did some initial testing. I wanted to see if this reproduces on master. I used a devstack VM, and performed the following test:

1) Create two Neutron networks, each with an IPv4 and an IPv6 subnet (slaac/slaac, so the router is a Neutron router and not a physical router; this differs from Marius' setup but should not affect the results of the test).
2) Create two security groups, each with rules copied from the default security group that allow all incoming traffic from the same security group.
3) Create an instance on the first network with the first security group, and a second instance on the second network with the second security group.
4) From one instance, ping the other via IPv4. This is blocked successfully, unless I add a rule that allows incoming traffic from the other security group.
5) From one instance, ping the other via IPv6. Same result as IPv4: blocked successfully, unless I explicitly allow incoming traffic from the other security group.

I conclude that this doesn't reproduce on master. That could be because the issue was fixed between OSP 7 and master, because of differences between OSP-d and devstack, or because of differences between the scenario Marius described and mine.

@Marius, can I have credentials to an environment that reproduces the issue?
Comment 11 Assaf Muller 2016-02-17 15:07:13 EST
Shifting to Neutron.
Comment 12 Assaf Muller 2016-02-17 15:32:47 EST
Back to Director...

I noticed that 'ip6tables -L' is completely empty. Not good.

The Neutron OVS agent applies ipv6 iptables rules only if IPv6 is enabled on the machine, and it checks via:
cat /proc/sys/net/ipv6/conf/default/disable_ipv6

And on the compute node in Marius' setup that value was '1', which means that IPv6 is disabled.

I'm not sure what is setting that value or why, but it's not Neutron. It's the deployment tool's responsibility to decide on Neutron's behalf if IPv6 should be enabled or disabled.
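The check described above can be sketched as follows; ipv6_state is an illustrative helper for this report, not actual Neutron code:

```shell
# Mirrors the check described above: the OVS agent reads the kernel's
# disable_ipv6 flag and skips programming ip6tables rules when it reads '1'.
ipv6_state() {
    flag_file="$1"   # e.g. /proc/sys/net/ipv6/conf/default/disable_ipv6
    if [ "$(cat "$flag_file" 2>/dev/null || echo 1)" = "1" ]; then
        echo "disabled"   # agent will not program ip6tables rules
    else
        echo "enabled"    # agent programs ip6tables rules
    fi
}

ipv6_state /proc/sys/net/ipv6/conf/default/disable_ipv6
```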
Comment 14 Sridhar Gaddam 2016-02-18 02:24:57 EST
Agree with @Assaf's analysis. Neutron looks at the flag "net.ipv6.conf.default.disable_ipv6" and uses this info for SecurityGroups and IPv6 forwarding inside the namespace.
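Given that analysis, a deployment-side fix amounts to keeping the flag enabled on nodes that carry IPv6 tenant traffic. An illustrative /etc/sysctl.conf fragment (the actual tripleo-heat-templates change may use a different mechanism):

```
# Keep IPv6 enabled so the Neutron OVS agent programs ip6tables rules
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.all.disable_ipv6 = 0
```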
Comment 19 errata-xmlrpc 2016-03-09 15:01:37 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

