Description of problem:
Bulk port creation adds the default security group to the ports in addition to the ones requested in the operation. I.e. a bulk port creation requested with 3 security groups results in ports created with those SGs plus the default SG. Because of this issue, namespace isolation is not achieved in OCP on OSP deployments with Kuryr.

Version-Release number of selected component (if applicable):
RHOS_TRUNK-15.0-RHEL-8-20190830.n.0

How reproducible:
always

Steps to Reproduce:
1. Request bulk port creation with some security groups in neutron
2. Check that the created ports are given the requested security groups only

Actual results:
The default security group is also assigned to the created ports.

>>> req = {'project_id': '46302f6a4ddc41d8b8cc8d0a29fbad9d',
...        'network_id': 'bf252d82-5601-4288-86c7-9ad8b9d877cb',
...        'admin_state_up': True,
...        'device_owner': 'luis',
...        'security_groups': ['09316b40-c075-47a7-861b-d94e2bd47be4', '6fd69032-b3b4-4384-aba9-dbcff71f3478', 'fddc1518-352f-4037-b74b-e186573e2a58']}
>>> bulk_port_rq = {'ports': [req, req]}
>>> ports = neutron.create_port(bulk_port_rq).get('ports')
>>> ports
[{u'allowed_address_pairs': [], u'extra_dhcp_opts': [], u'updated_at': u'2019-09-06T10:29:44Z', u'device_owner': u'luis', u'revision_number': 1, u'port_security_enabled': True,
  u'fixed_ips': [{u'subnet_id': u'439fd50e-c547-438f-81c2-b3dca1e05729', u'ip_address': u'10.11.13.32'}],
  u'id': u'1a32142d-bb37-493a-b6c4-7956da8fd5bd',
  u'security_groups': [u'09316b40-c075-47a7-861b-d94e2bd47be4', u'6fd69032-b3b4-4384-aba9-dbcff71f3478', u'd20a8bf2-9b74-4c6f-9d54-4562945175e6', u'fddc1518-352f-4037-b74b-e186573e2a58'],
  u'mac_address': u'fa:16:3e:1b:47:89', u'project_id': u'46302f6a4ddc41d8b8cc8d0a29fbad9d', u'status': u'DOWN', u'description': u'', u'tags': [],
  u'dns_assignment': [{u'hostname': u'host-10-11-13-32', u'ip_address': u'10.11.13.32', u'fqdn': u'host-10-11-13-32.openstacklocal.'}],
  u'qos_policy_id': None, u'name': u'', u'admin_state_up': True, u'network_id': u'bf252d82-5601-4288-86c7-9ad8b9d877cb', u'dns_name': u'', u'created_at': u'2019-09-06T10:29:44Z',
  u'binding:vnic_type': u'normal', u'device_id': u'', u'tenant_id': u'46302f6a4ddc41d8b8cc8d0a29fbad9d'},
 {u'allowed_address_pairs': [], u'extra_dhcp_opts': [], u'updated_at': u'2019-09-06T10:29:44Z', u'device_owner': u'luis', u'revision_number': 1, u'port_security_enabled': True,
  u'fixed_ips': [{u'subnet_id': u'439fd50e-c547-438f-81c2-b3dca1e05729', u'ip_address': u'10.11.13.100'}],
  u'id': u'7d091415-6e29-4583-ac31-f51ac687fdc7',
  u'security_groups': [u'09316b40-c075-47a7-861b-d94e2bd47be4', u'6fd69032-b3b4-4384-aba9-dbcff71f3478', u'd20a8bf2-9b74-4c6f-9d54-4562945175e6', u'fddc1518-352f-4037-b74b-e186573e2a58'],
  u'mac_address': u'fa:16:3e:bf:3d:1f', u'project_id': u'46302f6a4ddc41d8b8cc8d0a29fbad9d', u'status': u'DOWN', u'description': u'', u'tags': [],
  u'dns_assignment': [{u'hostname': u'host-10-11-13-100', u'ip_address': u'10.11.13.100', u'fqdn': u'host-10-11-13-100.openstacklocal.'}],
  u'qos_policy_id': None, u'name': u'', u'admin_state_up': True, u'network_id': u'bf252d82-5601-4288-86c7-9ad8b9d877cb', u'dns_name': u'', u'created_at': u'2019-09-06T10:29:44Z',
  u'binding:vnic_type': u'normal', u'device_id': u'', u'tenant_id': u'46302f6a4ddc41d8b8cc8d0a29fbad9d'}]
>>> ports[0]['security_groups']
[u'09316b40-c075-47a7-861b-d94e2bd47be4', u'6fd69032-b3b4-4384-aba9-dbcff71f3478', u'd20a8bf2-9b74-4c6f-9d54-4562945175e6', u'fddc1518-352f-4037-b74b-e186573e2a58']

So the port request is made with 3 given security groups... and the default one (d20a8b...) is added too.
(shiftstack) [stack@undercloud-0 ~]$ openstack security group list | grep d20a8bf2-9b74-4c6f-9d54-4562945175e6
| d20a8bf2-9b74-4c6f-9d54-4562945175e6 | default | Default security group | 46302f6a4ddc41d8b8cc8d0a29fbad9d | [] |
(shiftstack) [stack@undercloud-0 ~]$ openstack port list --device-owner luis
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+
| ID                                   | Name | MAC Address       | Fixed IP Addresses                                                          | Status |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+
| 1a32142d-bb37-493a-b6c4-7956da8fd5bd |      | fa:16:3e:1b:47:89 | ip_address='10.11.13.32', subnet_id='439fd50e-c547-438f-81c2-b3dca1e05729'  | DOWN   |
| 7d091415-6e29-4583-ac31-f51ac687fdc7 |      | fa:16:3e:bf:3d:1f | ip_address='10.11.13.100', subnet_id='439fd50e-c547-438f-81c2-b3dca1e05729' | DOWN   |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+
(shiftstack) [stack@undercloud-0 ~]$ openstack port show 1a32142d-bb37-493a-b6c4-7956da8fd5bd | grep security
| port_security_enabled | True |
| security_group_ids    | 09316b40-c075-47a7-861b-d94e2bd47be4, 6fd69032-b3b4-4384-aba9-dbcff71f3478, d20a8bf2-9b74-4c6f-9d54-4562945175e6, fddc1518-352f-4037-b74b-e186573e2a58 |

Expected results:
Only the requested security groups are assigned to the ports.
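The mismatch above can be checked mechanically rather than by eyeballing UUIDs. A minimal sketch, not part of the original report (`unexpected_sgs` is a hypothetical helper), that diffs the security groups a created port came back with against the ones requested:

```python
def unexpected_sgs(requested, actual):
    """Return security group IDs present on the port but never requested."""
    return sorted(set(actual) - set(requested))


# The three SGs passed in the bulk request above
requested = ['09316b40-c075-47a7-861b-d94e2bd47be4',
             '6fd69032-b3b4-4384-aba9-dbcff71f3478',
             'fddc1518-352f-4037-b74b-e186573e2a58']

# security_groups as returned on the created port
actual = ['09316b40-c075-47a7-861b-d94e2bd47be4',
          '6fd69032-b3b4-4384-aba9-dbcff71f3478',
          'd20a8bf2-9b74-4c6f-9d54-4562945175e6',
          'fddc1518-352f-4037-b74b-e186573e2a58']

# With the buggy behaviour this flags exactly the leaked default SG
print(unexpected_sgs(requested, actual))
```

With the data from this report the helper returns the project's default SG (d20a8bf2-...), and an empty list once the bug is fixed.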
Quick test on master devstack, I can reproduce with:

$ cat bulk_secgroups.json
{
    "ports": [
        {
            "network_id": "226ac6be-753b-46d3-8636-5ad226297ab0",
            "security_groups": ["91e47ea5-3d9c-437b-a31e-0a2fd8d3181f"]
        }
    ]
}
$ export MY_TOKEN=$(openstack token issue -c id -f value)
$ curl -H "X-Auth-Token: $MY_TOKEN" -X POST http://127.0.0.1:9696/v2.0/ports -d @bulk_secgroups.json | jq .ports[0].id
"91ec92c1-4d9c-4a8b-b5d1-55769c4c75c2"
$ openstack port show 91ec92c1-4d9c-4a8b-b5d1-55769c4c75c2 | grep security_group
| security_group_ids | 91e47ea5-3d9c-437b-a31e-0a2fd8d3181f, fc3f8c84-118e-46c5-8dc7-18d02ef926af |
$ openstack security group show default -f value -c id
fc3f8c84-118e-46c5-8dc7-18d02ef926af
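The request body used by the curl reproducer can also be generated from Python. A sketch, assuming only the body shape shown above (`bulk_port_request` is a hypothetical helper; the POST itself is still left to curl or python-neutronclient):

```python
import json


def bulk_port_request(network_id, security_groups, count=1):
    """Build the JSON body for a bulk POST /v2.0/ports, one entry per port."""
    port = {"network_id": network_id,
            "security_groups": list(security_groups)}
    return {"ports": [dict(port) for _ in range(count)]}


body = bulk_port_request("226ac6be-753b-46d3-8636-5ad226297ab0",
                         ["91e47ea5-3d9c-437b-a31e-0a2fd8d3181f"])
# Equivalent to the bulk_secgroups.json file above
print(json.dumps(body, indent=4))
```

Passing `count=2` reproduces the two-port bulk request from the original description.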
Fix is waiting on CI upstream, and then we should be able to fast-track its backports upstream and downstream.
Verified on OSP 15 compose RHOS_TRUNK-15.0-RHEL-8-20190924.n.2 with openstack-neutron-14.0.3-0.20190923200444.5eb234b.el8ost.noarch

Verified with Bernard's reproducer:

$ cat bulk_secgroups.json
{
    "ports": [
        {
            "network_id": "ab8b704e-d5b0-441a-9b5e-1b24e7bd2822",
            "security_groups": ["a616bb52-b406-4da5-9540-374fde36de68", "c638097d-9713-45f4-adf5-1ef3339e71df", "e443f68c-1a50-489e-bdd4-e3c0b33b5af3"]
        }
    ]
}
$ export MY_TOKEN=$(openstack token issue -c id -f value)
$ curl -H "X-Auth-Token: $MY_TOKEN" -X POST http://10.46.22.33:9696/v2.0/ports -d @bulk_secgroups.json | jq .ports[0].id
"93be83d3-6b5d-4e8e-a3da-a31bde933df0"
$ openstack port show 93be83d3-6b5d-4e8e-a3da-a31bde933df0 | grep security_group
| security_group_ids | a616bb52-b406-4da5-9540-374fde36de68, c638097d-9713-45f4-adf5-1ef3339e71df, e443f68c-1a50-489e-bdd4-e3c0b33b5af3 |
$ openstack security group show default -f value -c id
643dea5a-ceb4-4585-86a9-aa8188ce3431

The default security group is no longer assigned to the created port. Namespace isolation is now achieved in OCP on OSP with Kuryr (verified in openshift-ansible-3.11.146): pods created in different namespaces cannot reach each other.
$ oc new-project ns1
$ oc run --image kuryr/demo pod1
$ oc new-project ns2
$ oc run --image kuryr/demo pod2
$ oc -n ns1 get pod -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP             NODE                               NOMINATED NODE
pod1-1-2c9bd   1/1     Running   0          4m    10.11.14.158   app-node-1.openshift.example.com   <none>
$ oc -n ns2 get pod -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP             NODE                               NOMINATED NODE
pod2-1-6xdh5   1/1     Running   0          2m    10.11.15.23    app-node-0.openshift.example.com   <none>
$ oc -n ns1 rsh pod1-1-2c9bd ping 10.11.15.23 -c1
PING 10.11.15.23 (10.11.15.23): 56 data bytes
--- 10.11.15.23 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss
command terminated with exit code 1
$ oc -n ns2 rsh pod2-1-6xdh5 ping 10.11.14.158 -c1
PING 10.11.14.158 (10.11.14.158): 56 data bytes
--- 10.11.14.158 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss
command terminated with exit code 1

Pods cannot ping each other.
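The isolation check above can be scripted instead of read off the terminal. A sketch (`packet_loss` is a hypothetical helper, not from the report) that parses ping's statistics line so 100% loss can be asserted programmatically:

```python
import re


def packet_loss(ping_output):
    """Extract the packet-loss percentage from ping's statistics line, or None."""
    m = re.search(r"([\d.]+)% packet loss", ping_output)
    return float(m.group(1)) if m else None


# Summary line as printed by ping in the transcript above
summary = "1 packets transmitted, 0 packets received, 100% packet loss"
print(packet_loss(summary))  # -> 100.0
```

With cross-namespace isolation working, the helper should report 100.0 for pings between pods in different namespaces.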
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2957