Red Hat Bugzilla – Bug 986832
Horizon displays IPs allocated to other projects.
Last modified: 2015-02-15 17:02:29 EST
Description of problem:
When a user has the admin role, the list of floating IPs offered when associating an IP with an instance includes IPs allocated to other projects. An IP can only be associated with an instance in the same project the IP is allocated to; otherwise the operation fails. This is inconvenient because it forces the admin to remember which IPs belong to which project.

Version-Release number of selected component (if applicable):
python-django-horizon-2013.1.2-2.el6ost.noarch
openstack-nova-common-2013.1.2-4.el6ost.noarch
openstack-quantum-2013.1.2-4.el6ost.noarch
python-quantumclient-2.2.1-2.el6ost.noarch

How reproducible:
Always

Steps to Reproduce:
1. Have two tenants; have the admin role in one of them.
2. Allocate an IP in the tenant where you don't have the admin role.
3. Try assigning it to an instance in the project where you have the admin role.

Actual results:
The IP allocated to the other project is offered. The IP cannot be assigned, failing with:
Error: Bad floatingip request: Port 9d6a9406-3540-4ca3-9fac-094f771d9f47 is associated with a different tenant than Floating IP ba65f746-bc06-4d8e-b7b8-49bb11aeddfb and therefore cannot be bound.
Error: Unable to associate IP address 10.34.68.211.

Expected results:
Only IPs allocated to the current project are offered.

Additional info:
quantum floatingip-list also displays all allocated floating IPs, so maybe this should be fixed at the client level as well. We probably need an equivalent of `nova list --all-tenants`.
quantum floatingip-list is meant to return IPs only for "a given tenant", though I don't see a way to explicitly set a tenant argument and ensure we don't also get IPs from other tenants. It seems floating IPs are an extension, so this may depend on the plugin in use? Adding a needinfo on our quantum SME for thoughts about this. Thanks!
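For reference, here is a minimal sketch of what tenant filtering looks like at the Python bindings level, not a confirmed CLI feature: as far as I can tell, the v2.0 client's list call passes keyword arguments through as API query filters, so a tenant_id filter can be supplied even though quantum floatingip-list exposes no flag for it. The credentials and tenant ID below are placeholders.

    # Sketch only: assumes python-quantumclient 2.2.x style bindings and
    # placeholder credentials/IDs.
    from quantumclient.v2_0 import client

    quantum = client.Client(username='admin',
                            password='secret',
                            tenant_name='admin',
                            auth_url='http://keystone.example.com:5000/v2.0/')

    # With an admin token, the unfiltered call returns floating IPs from all
    # tenants, which is what both Horizon and the CLI currently show.
    all_fips = quantum.list_floatingips()['floatingips']

    # Passing tenant_id as a query filter restricts the result to one tenant.
    own_fips = quantum.list_floatingips(tenant_id='<TENANT_ID>')['floatingips']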
A fix has been proposed upstream for Havana RC1, we should backport it once it's merged.
Fix was merged upstream for Havana RC1 (3827a7e73a). Proposed backport to grizzly stable branch upstream at https://review.openstack.org/#/c/50406/
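Without claiming to reproduce the merged change, the general shape of the fix is to scope the floating IP list to the tenant of the current request before building the association choices. A rough sketch under that assumption (the function and parameter names are illustrative, not Horizon's actual API):

    # Illustrative sketch of tenant-scoped filtering; not the upstream patch.
    def tenant_floating_ip_list(quantum_client, tenant_id):
        """Return only floating IPs owned by tenant_id.

        quantum_client is assumed to be a quantumclient.v2_0.client.Client
        instance; tenant_id would come from the dashboard request
        (e.g. request.user.tenant_id).
        """
        resp = quantum_client.list_floatingips(tenant_id=tenant_id)
        return resp.get('floatingips', [])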
Upstream backport merged in 2013.1.4
Needinfo: Julie Pichon

Thanks for your email. I reworded the Doc Text; could you please check that it is correct? Specifically, I'm not sure whether I interpreted the "(Neutron)" correctly to mean that this only happened when OpenStack Networking was used for networking.
Thank you for rewording, Bruce, this looks good to me. The "(Neutron)" comment was indeed meant to indicate that this only applies when using OpenStack Networking.
Tested NVR: python-django-horizon-2013.1.4-1
Grizzly 2013-10-24.5

The scenario described in Comment #0 still reproduces.

To Reproduce:
=============
1. Have two tenants.
2. Create a separate external network for each tenant (including an internal network and a router to link them).
3. Launch an instance in one of the tenants.
4. Associate a floating IP with that instance.
5. Click + and select the pool of the other tenant.
6. Click Associate.

Actual results:
===============
The IP address suggested belongs to the other pool. Association fails with the following error:
Error: Bad floatingip request: Port 9d6a9406-3540-4ca3-9fac-094f771d9f47 is associated with a different tenant than Floating IP ba65f746-bc06-4d8e-b7b8-49bb11aeddfb and therefore cannot be bound.
Error: Unable to associate IP address 10.34.68.211.

Expected results:
=================
Only IPs allocated to the current tenant should be listed.

Error: External network ed6ff3a9-adb4-427e-a4c4-36c241ffcc83 is not reachable from subnet ebb54939-496d-413e-b38f-6168a031b95f. Therefore, cannot associate Port 17835e61-8583-4c54-a263-aeee0d08ee4b with a Floating IP.
Error: Unable to associate IP address 10.10.10.4.
Hi Nir,

Thank you for the detailed steps, I believe you found a new bug. Would you mind reporting it separately (upstream as well)? The steps you describe are different from comment 0, and this new problem also affects regular users, even if they don't have the 'admin' role in any tenant. The fix should focus specifically on the floating IP pool list, to prevent people from allocating IPs from a non-shared pool belonging to another tenant in the first place. It's a valid bug and absolutely should be fixed.

Based on the information available in comment 0, here's how I tested the fix for this bug, with one shared external network:

1. Have 2 tenants, 'admin' and 'rhos'. Have the 'admin' role in the 'admin' tenant.
2. Allocate one IP in the 'rhos' tenant (e.g. 192.168.44.10) and one IP in the 'admin' tenant (e.g. 192.168.44.11).
3. Start an instance in the 'admin' tenant and click on the 'Associate IP' list.

Before the patch, unexpected:
4. Both '192.168.44.10' and '192.168.44.11' are displayed, but using the .10 one is not possible and would cause the error message.

After the patch, closer to expected:
4. Only '192.168.44.11' is displayed.

In my opinion, although they are related, these are two different problems. If you disagree, then we should drop this bz from this rebase / errata, as it will need to be fixed upstream first.
I'm having trouble reproducing comment 9 in order to get the same error message about the wrong tenant - I get the 'network unreachable' message, which indicates a router needs to be added.

It seems external networks are implicitly shared at the moment and cannot be restricted to a single tenant, see e.g. https://wiki.openstack.org/wiki/Neutron/sharing-model-for-external-networks :

"Currently the concept of 'external' network is somewhat similar to the concept of a 'shared' network. However, while every tenant can operate on a shared network, performing operations such as creating port, the set of operations a tenant can perform on an external network is more limited, as it's currently restrained to setting external gateways on routers and creating floating IPs. Nevertheless, the concept of 'external' implies some forms of sharing, and this has some bearing on the topologies that can be achieved. For instance it is not possible at the moment have an external network which is reserved to a specific tenant. That external network will always show up in queries performed by other tenants too."

Setting back to MODIFIED as the initial bug is fixed - we can debug the other one in another report. If you could include the specific steps on network creation to get the 'wrong tenant' error message, that would also be helpful. Thanks!
(In reply to Julie Pichon from comment #10)
> Hi Nir,
> <snipped>
>
> Based on the information available in comment 0, here's how I tested the fix
> for this bug, with one shared external network:
>
> 1. Have 2 tenants, 'admin' and 'rhos'. Have the 'admin' role in the 'admin'
> tenant.
> 2. Allocate one IP in the 'rhos' tenant (e.g. 192.168.44.10) and one IP in
> the 'admin' tenant (e.g. 192.168.44.11)
> 3. Start an instance in the 'admin' tenant and click on the 'Associate IP'
> list:
>
> Before the patch, unexpected:
>
> 4. Both '192.168.44.10' and '192.168.44.11' are displayed, but using the
> .10 one is not possible and would cause the error message.
>
> After the patch, closer to expected:
>
> 4. Only '192.168.44.11' is displayed.
>
> In my opinion although they are related, these are 2 different problems. If
> you disagree, then we should drop this bz from this rebase / errata as it
> will need to be fixed upstream first.

Hi Julie,

Thanks for looking into this and making the scenario clearer. I re-tested, following the steps you described, and the result was as you specified it should be after the patch (the listed floating IP was the IP allocated to the current tenant). Moving this bug and Bug #1022748 back to Verified.

As for the scenario in Comment #9, I will file a new BZ with details on how exactly I created the network topology. I basically created an internal and an external network for each tenant (meaning 2 external networks) and, in addition, created a router for each tenant.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-1510.html