Description of problem:
We needed to increase the pool of floating IPs, but it is not possible:

[root@controller ~(keystone_admin)]$ quantum subnet-update a52b0d51-c0a8-4eb2-b734-7a6f79f440bc --allocation-pool start=10.x.y.128,end=10.x.y.248
Unrecognized attribute(s) 'allocation_pool'
[root@controller ~(keystone_admin)]$ quantum subnet-update a52b0d51-c0a8-4eb2-b734-7a6f79f440bc --allocation-pools start=10.x.y.128,end=10.x.y.248
Cannot update read-only attribute allocation_pools

Version-Release number of selected component (if applicable):
openstack-quantum-2013.1.4-3.el6ost.noarch
python-quantumclient-2.2.1-2.el6ost.noarch

How reproducible: always

Steps to Reproduce:
1. Have an (external) network with a subnet
2. Try updating the allocation range

Actual results: not possible

Expected results: range updated

Additional info:
The allocation pool setting is specifically listed as create/read-only in the upstream API docs: https://wiki.openstack.org/wiki/Neutron/APIv2-specification#Subnet. Neutron is therefore behaving as specified here, so this is not a bug and certainly doesn't warrant a severity of "high". There is a blueprint that has been open for a while on this: https://blueprints.launchpad.net/neutron/+spec/make-allocation-pool-updatable but no progress has been made and no target milestone has been set, so a target of 4.0 here is clearly inappropriate. jhenner: as this is a feature request, do you still want to keep this bug open?
The fact that it is read-only makes it impossible to increase the range without deleting all the ports (floating IPs) that have been associated with it. That is very inconvenient: if one runs out of floating IPs, there is effectively no way to enlarge the pool. That's why I think it is high severity.
I would call this a design-flaw removal request.
The functionality is available in OSP 6.
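For reference, on releases where the blueprint has landed (OSP 6), the update that failed in the original report is expected to succeed. A hedged sketch of the invocation — the subnet ID and IP range are the placeholders from the report, and the exact client option name should be verified against the installed python-neutronclient:

```shell
# Assumes a neutron release with the make-allocation-pool-updatable
# blueprint merged; subnet ID and addresses are placeholders.
neutron subnet-update a52b0d51-c0a8-4eb2-b734-7a6f79f440bc \
    --allocation-pool start=10.x.y.128,end=10.x.y.248
```

On the older openstack-quantum-2013.1.4 packages from the report, the same request is rejected because `allocation_pools` is read-only in that API version.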
Hi,

I have several questions.

1) As I see in the blueprint description:
https://blueprints.launchpad.net/neutron/+spec/make-allocation-pool-updatable

"Then it will ensure that the new range doesn't exclude any IPs that are currently in use. It will fail if the new pool excludes IPs currently in use"

The actual behaviour is: when a subnet update shrinks the range and a floating IP outside the new range is already in use, the update succeeds and the excluded floating IP continues to function.

From the customer's point of view, I would also expect the update to fail in this situation.

Could you please clarify the expected behaviour for me?

2) Is this just new CLI functionality, or should it also be in Horizon?
Hi,

I've finished the test plan; a review would be appreciated:
https://tcms.engineering.redhat.com/case/447857/?from_plan=14180

Thanks,
Toni
(In reply to Toni Freger from comment #11)
> Hi,
>
> Have several questions.
>
> 1)As I see within BluePrint explanation:
> https://blueprints.launchpad.net/neutron/+spec/make-allocation-pool-updatable
>
> "Then it will ensure that the new range doesn't exclude any IPs that are
> currently in use. It will fail if the new pool excludes IPs currently in use"
>
> Actual behaviour is, when the subnet update is decreasing the number of IP's
> in the range, and FloatingIp is already in use, the update succeeds and the
> excluded FloatingIp continue to function.
>
> From the customer point of view I am also thinking that in the current
> situation the update should have failed.
>
> Could you please clarify the expected behaviour for me?

I looked at the implementation and it does not explicitly check whether any floating IP falls outside the new allocation pool range. Specifically, it validates only that the start and end of each allocation pool are valid addresses, that the new pools don't overlap with one another, and that each new pool falls within the subnet.

> 2)Is this just CLI new functionality or it should be also in Horizon?

This is API functionality. I'm not sure how Horizon behaves here.
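The validation gap described above — pools are range-checked against the subnet, but in-use IPs are never checked against the shrunken pools — can be illustrated with a small standalone sketch. This is illustrative only, not the neutron code; `ips_outside_pools` is a hypothetical helper showing the check the blueprint text promises but the implementation omits:

```python
import ipaddress

def ips_outside_pools(allocated_ips, pools):
    """Return the allocated IPs not covered by any (start, end) pool.

    allocated_ips: iterable of IP address strings currently in use.
    pools: iterable of (start, end) string pairs, inclusive ranges.
    """
    ranges = [
        (int(ipaddress.ip_address(start)), int(ipaddress.ip_address(end)))
        for start, end in pools
    ]
    outside = []
    for ip in allocated_ips:
        n = int(ipaddress.ip_address(ip))
        # An IP is fine if at least one pool range contains it.
        if not any(lo <= n <= hi for lo, hi in ranges):
            outside.append(ip)
    return outside

# Shrinking the pool below an in-use floating IP: 10.0.0.200 is
# stranded outside the new range, which is exactly the case the
# update currently accepts without warning.
stranded = ips_outside_pools(
    ["10.0.0.130", "10.0.0.200"],
    [("10.0.0.128", "10.0.0.150")],
)
print(stranded)  # -> ['10.0.0.200']
```

Per the blueprint wording, the update would be rejected whenever this list is non-empty; the observed behaviour is that the update succeeds and the stranded floating IP keeps working.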
I think we need to inform the user; otherwise they will end up with active floating IPs that are no longer part of the new pool.
I've opened two upstream bugs:
https://bugs.launchpad.net/neutron/+bug/1410171
https://bugs.launchpad.net/neutron/+bug/1410173

Thanks,
Toni
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHEA-2015-0148.html