*** Bug 1289988 has been marked as a duplicate of this bug. ***
A workaround would be to disable l2pop entirely in the cluster. Was this brought up with the customer?
@Anil - Do you think this patch could explain the issue? It's the only l2pop fix I could find that seems relevant and is not already available in OSP 5.
Assaf, Anil, Nokia would gladly test this if we can make it available to them. Please let me know as soon as possible as this issue is now blocking for Nokia. Thank you, Scott
Assaf, Anil, Anand, it seems this bug was misfiled. The customer having this issue is on OSP 6, not OSP 5. I've modified the BZ appropriately. -Scott
The patch I linked may be relevant. It's available from 2014.2.4 and the customer is on 2014.2.3-9. Can we try applying that patch, or doing a minor upgrade and see if it helps?
Agree with Assaf. "l2pop", which creates tunnels, is in this case invoked when either get_bound_port_context (through _commit_port_binding) or update_port_status is called. Can you please try Assaf's suggestion? Can we get access to the setup to debug further?
Anil, by "access to the system", do you mean "Bomgar"? If so, can you coordinate with the customer through the SFDC/portal case 01514466? I know they will be available for this today.
The patch is already available in 2014.2.4, just have them perform a minor upgrade.
My mistake: we haven't yet released or provided a build based on 2014.2.4. We'll work on it.
Is the customer using multiple RPC and API workers? If so, there is no fix for now and the only solution is to disable l2pop. A fix is already proposed for this: https://review.openstack.org/#/c/269212/ Even with a single API worker and a single RPC worker, the same issue can be seen during bulk migrations. So it is better to disable l2pop for now to resolve the migration issues.
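For reference, disabling l2pop normally means two changes: dropping the l2population mechanism driver on the controller and turning off l2_population on each OVS agent, then restarting neutron-server and the neutron-openvswitch-agent services. A rough sketch follows; the exact config file paths vary between deployments (this has not been verified against this customer's OSP 6 setup):

```ini
# Controller: ML2 plugin config (path is an assumption, often
# /etc/neutron/plugins/ml2/ml2_conf.ini or /etc/neutron/plugin.ini)
[ml2]
# Remove l2population from the driver list, e.g. change
#   mechanism_drivers = openvswitch,l2population
# to:
mechanism_drivers = openvswitch

# Each compute/network node: OVS agent config (path is an assumption)
[agent]
l2_population = False
```

After editing, restart neutron-server on the controllers and neutron-openvswitch-agent on every node so the change takes effect.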
Link to the bug: https://bugzilla.redhat.com/show_bug.cgi?id=1289995
Martin Schuppert, can you please provide a build to the customer with the patches below, and let us know whether they solve this issue? https://review.openstack.org/#/c/272566 https://review.openstack.org/#/c/163178/
Thanks, Martin Schuppert. No objections. If the patch solves this issue, we will take it into OSP 6. Can you please share your build with the customer and ask them to test? Thanks, Anil
Anand and Martin Schuppert, any update on this bug? Thanks, Anil
Verified that the test is in.

rpm -qa | grep neutr
openstack-neutron-openvswitch-2014.2.3-37.el7ost.noarch
python-neutronclient-2.3.9-2.el7ost.noarch
openstack-neutron-common-2014.2.3-37.el7ost.noarch
openstack-neutron-2014.2.3-37.el7ost.noarch
openstack-neutron-ml2-2014.2.3-37.el7ost.noarch
python-neutron-2014.2.3-37.el7ost.noarch
(In reply to Alexander Stafeyev from comment #71) Correction: I meant that the CODE is in, not the test. Sorry :)
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-1104.html