Bug 1543300
| Summary: | router port binding fails with dvr and service subnets |
|---|---|
| Product: | Red Hat OpenStack |
| Reporter: | Luca Miccini <lmiccini> |
| Component: | openstack-neutron |
| Assignee: | Brian Haley <bhaley> |
| Status: | CLOSED ERRATA |
| QA Contact: | Toni Freger <tfreger> |
| Severity: | low |
| Priority: | low |
| Version: | 11.0 (Ocata) |
| CC: | amuller, bhaley, chrisw, nyechiel, ragiman, srevivo |
| Target Milestone: | z5 |
| Keywords: | Triaged, ZStream |
| Target Release: | 11.0 (Ocata) |
| Hardware: | All |
| OS: | Linux |
| Fixed In Version: | openstack-neutron-10.0.4-7.el7ost |
| Type: | Bug |
| Clones: | 1558090, 1558094 (view as bug list) |
| Bug Blocks: | 1558090, 1558094 |
| Last Closed: | 2018-05-18 16:56:10 UTC |
**Description** (Luca Miccini, 2018-02-08 08:13:13 UTC)
**Brian Haley** (comment #2):

So I think I see the problem.

On the original port creation, `_ipam_get_subnets()` was called with the port's `device_owner` as the `service_type` argument. But the code path in `ipam_backend_mixin.py` that calls `_ipam_get_subnets()` during a port update does not pass the old port's `device_owner`. In your case there is no "fallback" subnet without a `service_type` set, so the allocation fails.

I think this one-line change in `neutron/db/ipam_backend_mixin.py:update_port()` would fix it:

```python
valid_subnets = self._ipam_get_subnets(
    context, old_port['network_id'], host,
    service_type=old_port.get('device_owner'))
```

If you have a setup where you can modify the code and restart neutron-server to verify that this helps, I'd appreciate the feedback; otherwise I'll try to reproduce it locally soon.

I also think it would be good to have a more descriptive error message that includes the `network_id` and the `service_type` that is failing; that might have helped me narrow this down a little quicker. I will make that change as well.

**Luca Miccini**:

(In reply to Brian Haley from comment #2)

Thanks Brian, the lab is still available, I'll test and report ASAP.

Cheers,
Luca

**Luca Miccini**:

Hi Brian, working like a charm :)

```
[stack@undercloud-11 ~]$ openstack port list
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+
| ID                                   | Name | MAC Address       | Fixed IP Addresses                                                          | Status |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+
| 040736af-1130-4aae-8378-ba09dee4e782 |      | fa:16:3e:de:71:f1 | ip_address='172.16.0.8', subnet_id='126d91a6-58b0-415b-9ac4-965c960708f2'   | ACTIVE |
| 0dee6381-9637-466b-9694-426a474e16d1 |      | fa:16:3e:65:14:e9 | ip_address='172.16.0.3', subnet_id='126d91a6-58b0-415b-9ac4-965c960708f2'   | ACTIVE |
| 4b271798-e03a-455b-af10-ee963f5f904a |      | fa:16:3e:ea:6c:5f | ip_address='192.168.30.2', subnet_id='a37cbc9e-7667-465b-8de8-191d76c739be' | ACTIVE |
| 6d796319-6376-427f-bfd8-a9448b8d3e45 |      | fa:16:3e:a1:a5:c8 | ip_address='172.16.0.2', subnet_id='126d91a6-58b0-415b-9ac4-965c960708f2'   | ACTIVE |
| 8ccf2b8b-ee37-4e43-9fe7-4d899fd0ba81 |      | fa:16:3e:13:10:9e | ip_address='192.168.10.8', subnet_id='78e06d59-a3ba-4afd-83d6-da70d605e944' | ACTIVE |
| 9399805c-79ff-49d8-9a42-be844a92e576 |      | fa:16:3e:4d:d8:a5 | ip_address='172.16.0.7', subnet_id='126d91a6-58b0-415b-9ac4-965c960708f2'   | ACTIVE |
| b0ed23c9-f9d2-4d05-a969-73a88be07023 |      | fa:16:3e:a1:52:27 | ip_address='172.16.0.1', subnet_id='126d91a6-58b0-415b-9ac4-965c960708f2'   | ACTIVE |
| cff5f3fd-bf39-4438-8ae7-ff746550b869 |      | fa:16:3e:14:68:50 | ip_address='192.168.20.2', subnet_id='8bf44b3f-afcd-4e12-b4de-0903ea34cfda' | N/A    |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+
```
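The subnet filtering Brian describes can be sketched in a few lines of self-contained Python. This is a simplified illustration of the behaviour, not neutron's actual implementation; the subnet data mirrors the two service subnets in this setup:

```python
def select_subnets(subnets, service_type=None):
    """Simplified model of service-subnet filtering.

    A subnet that declares service_types only serves ports whose
    device_owner matches one of them; a subnet with no service_types
    is a generic fallback usable by any port.
    """
    if service_type:
        matching = [s for s in subnets
                    if service_type in s.get('service_types', [])]
        if matching:
            return matching
    # No service_type passed (or nothing matched): fall back to
    # subnets without any service_type set.
    fallback = [s for s in subnets if not s.get('service_types')]
    if not fallback:
        raise LookupError('no valid subnet for service_type %r'
                          % service_type)
    return fallback


subnets = [
    {'name': 'demo-snat-subnet',
     'service_types': ['network:router_gateway']},
    {'name': 'demo-floating-ip-subnet',
     'service_types': ['network:floatingip']},
]

# Port create passed the device_owner, so allocation worked:
print([s['name'] for s in select_subnets(subnets, 'network:router_gateway')])
# → ['demo-snat-subnet']

# Port update did not pass it; with no fallback subnet on the
# network, the allocation failed -- the bug described above:
try:
    select_subnets(subnets)
except LookupError as exc:
    print('allocation fails:', exc)
```

Brian's one-line fix corresponds to passing `old_port.get('device_owner')` as the `service_type` on the update path as well, so both calls take the first branch.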
```
[stack@undercloud-11 ~]$ openstack subnet show a37cbc9e-7667-465b-8de8-191d76c739be
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| allocation_pools  | 192.168.30.2-192.168.30.254          |
| cidr              | 192.168.30.0/24                      |
| created_at        | 2018-02-12T17:55:35Z                 |
| description       |                                      |
| dns_nameservers   |                                      |
| enable_dhcp       | False                                |
| gateway_ip        | 192.168.30.1                         |
| host_routes       |                                      |
| id                | a37cbc9e-7667-465b-8de8-191d76c739be |
| ip_version        | 4                                    |
| ipv6_address_mode | None                                 |
| ipv6_ra_mode      | None                                 |
| name              | demo-snat-subnet                     |
| network_id        | 99b466e9-0108-4176-aa0b-dc43f9e583e9 |
| project_id        | 08493cd2fc7c46d5b1aeaa7a3dce43b8     |
| revision_number   | 3                                    |
| segment_id        | None                                 |
| service_types     | network:router_gateway               |
| subnetpool_id     | None                                 |
| updated_at        | 2018-02-12T17:55:35Z                 |
+-------------------+--------------------------------------+

[stack@undercloud-11 ~]$ openstack subnet show 8bf44b3f-afcd-4e12-b4de-0903ea34cfda
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| allocation_pools  | 192.168.20.2-192.168.20.254          |
| cidr              | 192.168.20.0/24                      |
| created_at        | 2018-02-12T17:55:40Z                 |
| description       |                                      |
| dns_nameservers   |                                      |
| enable_dhcp       | False                                |
| gateway_ip        | 192.168.20.1                         |
| host_routes       |                                      |
| id                | 8bf44b3f-afcd-4e12-b4de-0903ea34cfda |
| ip_version        | 4                                    |
| ipv6_address_mode | None                                 |
| ipv6_ra_mode      | None                                 |
| name              | demo-floating-ip-subnet              |
| network_id        | 99b466e9-0108-4176-aa0b-dc43f9e583e9 |
| project_id        | 08493cd2fc7c46d5b1aeaa7a3dce43b8     |
| revision_number   | 3                                    |
| segment_id        | None                                 |
| service_types     | network:floatingip                   |
| subnetpool_id     | None                                 |
| updated_at        | 2018-02-12T17:55:40Z                 |
+-------------------+--------------------------------------+
```
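As a quick cross-check, the allocations reported here can be validated against the subnet CIDRs with Python's standard `ipaddress` module (the addresses and CIDRs below are copied from the listings in this report):

```python
import ipaddress

# (fixed IP, subnet CIDR) pairs taken from the port list and the
# two service subnets shown in this report
allocations = [
    ('192.168.30.2', '192.168.30.0/24'),  # network:router_gateway subnet
    ('192.168.20.2', '192.168.20.0/24'),  # network:floatingip subnet
]

for ip, cidr in allocations:
    # ip_network/ip_address support a direct containment check
    inside = ipaddress.ip_address(ip) in ipaddress.ip_network(cidr)
    print('%s in %s -> %s' % (ip, cidr, inside))
```

Both checks pass, confirming each router port was allocated from the service subnet matching its device owner.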
Connectivity looks good as well:

```
[root@garbd ~]# ping 192.168.30.2
PING 192.168.30.2 (192.168.30.2) 56(84) bytes of data.
64 bytes from 192.168.30.2: icmp_seq=1 ttl=64 time=1.11 ms
64 bytes from 192.168.30.2: icmp_seq=2 ttl=64 time=0.481 ms
^C
--- 192.168.30.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.481/0.797/1.114/0.317 ms

[root@garbd ~]# ping 192.168.20.2
PING 192.168.20.2 (192.168.20.2) 56(84) bytes of data.
^C

[root@garbd ~]# ping 192.168.20.2
PING 192.168.20.2 (192.168.20.2) 56(84) bytes of data.
64 bytes from 192.168.20.2: icmp_seq=11 ttl=62 time=1.20 ms
64 bytes from 192.168.20.2: icmp_seq=12 ttl=62 time=0.834 ms
64 bytes from 192.168.20.2: icmp_seq=13 ttl=62 time=0.777 ms
```

Thank you very much for the fast turnaround!

**Brian Haley**:

Thanks for the quick testing. I'll get an upstream review up with the fix and work on a backport.

**Brian Haley** (comment #7):

https://review.openstack.org/#/c/545615/ is now passing; it was just a random upstream failure. I will start the backports to OSP.

**Luca Miccini**:

(In reply to Brian Haley from comment #7)

Hi Brian, I've seen the patch merged. Do you think it would be feasible to have this patch backported before the next OSP11 maintenance release?

Thanks,
Luca

**Brian Haley**:

OSP11 change: https://code.engineering.redhat.com/gerrit/#/c/133138/

I'll clone this bug since it's also needed in 12 and 13.

**Closing note**:

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1614