Bug 1001751
| Summary: | Able to assign one floating IP to multiple instances. | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Jaroslav Henner <jhenner> |
| Component: | openstack-nova | Assignee: | Brent Eagles <beagles> |
| Status: | CLOSED ERRATA | QA Contact: | yfried |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.0 | CC: | beagles, breeler, dallan, jhenner, mlopes, ndipanov, oblaut, yeylon, yfried |
| Target Milestone: | z2 | Keywords: | TestOnly, ZStream |
| Target Release: | 4.0 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | openstack-nova-2013.2-0.25.rc1.el6ost | Doc Type: | Bug Fix |
Doc Text: |
Prior to this update, the Networking API implementation for associating floating IP addresses neither validated previous assignments of the same address nor removed an existing association when one was found. As a result, a floating IP address could appear to be assigned to multiple instances until background tasks reconciled the change in assignment.
With this update, Networking checks for a previous assignment of a floating IP and removes the old association immediately when establishing the new one. Consequently, the floating IP address information available to Compute clients (python-novaclient, Dashboard) is updated immediately, and the apparent duplicate assignment no longer occurs.
|
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-03-04 20:12:40 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
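The fix summarized in the Doc Text can be sketched as follows. This is an illustrative stand-in with invented names (`associate_floating_ip`, an in-memory `assignments` map), not nova's actual code, which performs the equivalent check against the Networking (neutron) API.

```python
# Illustrative sketch (hypothetical names, not nova's real internals) of the
# fixed association logic: before associating a floating IP, look up any
# previous assignment of the same address and remove it immediately, so
# clients never observe the IP on two instances at once.

def associate_floating_ip(assignments, floating_ip, instance_id):
    """Assign floating_ip to instance_id in the assignments map.

    Returns the instance that previously held the address, if any.
    """
    previous = assignments.get(floating_ip)
    if previous is not None and previous != instance_id:
        # The pre-fix code skipped this step and left the stale association
        # to be reconciled by a background task.
        del assignments[floating_ip]
    assignments[floating_ip] = instance_id
    return previous
```

Reassigning an address from one instance to another with this logic leaves the map with a single entry for that address, matching the verified `nova list` output later in this bug.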
Description (Jaroslav Henner, 2013-08-27 16:23:44 UTC)
I am not sure now, but IIRC the problem is nova caching the floating IPs.

In the original description of this issue, it indicates that this happens "always". Is that the case? Do you have a system that exhibits this behavior that I can access? To date, I have not had a system that behaves this way except under very rare circumstances, and even then only once.

First, note that the above instructions are (I assume) invalid: the IP is first assigned to one VM and then to the other. With this in mind, the booting of the VMs themselves is not particularly relevant; simply assigning the floating IP to one VM and then the other is enough to create the apparent behavior. A few notes:

- Changing floating IP assignment, especially with neutron, is a massively asynchronous activity. In Grizzly, floating IP assignment updates were much worse because the address was not immediately removed from nova's cache of information when the request was made from nova to neutron. This is improved in Havana, so this behavior should not occur.
- The duplicate assignment is only apparent. Once neutron (in this case quantum) processes the assignment, the functional details of the assignment are "done". After a period of time, the cache is reconciled and the floating IP appears in its proper place.
- The code that would affect and improve this behavior is actually in the nova module that implements nova functions in terms of the neutron API, so this issue *is* allocated to the correct component.

The relevant code was committed to Havana in July 2013, commit 49fdad5b6d1e0878438571c2e9c0421bc522cb2e. Using the same procedure in this BZ with the 4.0 code produces a more satisfactory result. I'm moving this to MODIFIED with the package that seems to have been the first to contain this code.
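The "apparent duplicate until the cache is reconciled" behavior described above can be modeled with a toy sketch (invented names, not nova's actual cache implementation): each instance carries a cached set of addresses, reassignment only updates the new holder's entry, and a periodic task later rebuilds every entry from the authoritative (neutron-side) mapping.

```python
# Toy model (invented names, not nova's cache code) of the pre-fix behavior:
# reassigning a floating IP updates the authoritative mapping and the new
# holder's cached entry, but leaves the old holder's entry stale until a
# periodic task reconciles it.

def reassign_floating_ip(cache, authoritative, floating_ip, target):
    authoritative[floating_ip] = target  # neutron applies the change at once
    cache[target].add(floating_ip)       # new holder's cache is updated
    # Pre-fix bug: the previous holder's cache entry is not touched here,
    # so the IP now appears under two instances at the same time.

def periodic_reconcile(cache, authoritative):
    # Background task: rebuild every cached entry from the authoritative map.
    for instance in cache:
        cache[instance] = {ip for ip, owner in authoritative.items()
                           if owner == instance}
```

Between the reassignment and the reconcile pass, both instances list the address, which is exactly the window this bug report observed.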
Please disregard comment 8.

Verified on Havana on RHEL 6.5:

```
[root@cougar16 ~(keystone_admin)]# rpm -qa | grep neutron
python-neutronclient-2.3.1-3.el6ost.noarch
python-neutron-2013.2.2-1.el6ost.noarch
openstack-neutron-2013.2.2-1.el6ost.noarch
openstack-neutron-openvswitch-2013.2.2-1.el6ost.noarch
[root@cougar16 ~(keystone_admin)]# rpm -qa | grep nova
openstack-nova-conductor-2013.2.2-2.el6ost.noarch
openstack-nova-cert-2013.2.2-2.el6ost.noarch
python-nova-2013.2.2-2.el6ost.noarch
openstack-nova-api-2013.2.2-2.el6ost.noarch
openstack-nova-compute-2013.2.2-2.el6ost.noarch
openstack-nova-scheduler-2013.2.2-2.el6ost.noarch
python-novaclient-2.15.0-2.el6ost.noarch
openstack-nova-common-2013.2.2-2.el6ost.noarch
openstack-nova-console-2013.2.2-2.el6ost.noarch
openstack-nova-novncproxy-2013.2.2-2.el6ost.noarch
```

The floating IP is moved from instance A to instance B:

```
[root@cougar16 ~(keystone_admin)]# nova add-floating-ip server1 10.35.166.3
[root@cougar16 ~(keystone_admin)]# nova list
+--------------------------------------+---------+--------+------------+-------------+---------------------------------+
| ID                                   | Name    | Status | Task State | Power State | Networks                        |
+--------------------------------------+---------+--------+------------+-------------+---------------------------------+
| 13bc8aa1-8991-4ad8-8cd0-e609c28617db | server1 | ACTIVE | None       | Running     | private=172.20.0.2, 10.35.166.3 |
| 2fbfea5e-ac22-4671-a3e0-cf2d7e178d83 | server2 | ACTIVE | None       | Running     | private=172.20.0.5              |
+--------------------------------------+---------+--------+------------+-------------+---------------------------------+
[root@cougar16 ~(keystone_admin)]# nova add-floating-ip server2 10.35.166.3
[root@cougar16 ~(keystone_admin)]# nova list
+--------------------------------------+---------+--------+------------+-------------+---------------------------------+
| ID                                   | Name    | Status | Task State | Power State | Networks                        |
+--------------------------------------+---------+--------+------------+-------------+---------------------------------+
| 13bc8aa1-8991-4ad8-8cd0-e609c28617db | server1 | ACTIVE | None       | Running     | private=172.20.0.2              |
| 2fbfea5e-ac22-4671-a3e0-cf2d7e178d83 | server2 | ACTIVE | None       | Running     | private=172.20.0.5, 10.35.166.3 |
+--------------------------------------+---------+--------+------------+-------------+---------------------------------+
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-0213.html
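The verification above can also be checked mechanically. The sketch below (with an invented helper, `instances_holding_ip`) parses the standard six-column `nova list` table and reports which instances list a given floating IP in their Networks column; after the fix, exactly one instance should.

```python
# Sketch of a check over `nova list` output: report which instance names
# list a given floating IP in their Networks column. `instances_holding_ip`
# is an invented helper that assumes the standard six-column table layout.

def instances_holding_ip(nova_list_output, floating_ip):
    holders = []
    for line in nova_list_output.splitlines():
        # Strip the outer pipes, then split the row into its six cells;
        # the +----+ separator rows yield a single cell and are skipped.
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) == 6 and floating_ip in cells[5]:
            holders.append(cells[1])  # the Name column
    return holders
```

Run against the second `nova list` table above, this returns only `server2`, confirming the address is no longer duplicated.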