Bug 1688323
Summary: | Namespace isolation fails due to race on neutron | |
---|---|---|---
Product: | Red Hat OpenStack | Reporter: | GenadiC <gcheresh>
Component: | openstack-neutron | Assignee: | Nate Johnston <njohnston>
Status: | CLOSED ERRATA | QA Contact: | Jon Uriarte <juriarte>
Severity: | urgent | Docs Contact: |
Priority: | urgent | |
Version: | 13.0 (Queens) | CC: | akatz, amuller, bhaley, chrisw, ekuris, itbrown, jlibosva, juriarte, ltomasbo, njohnston, racedoro, scohen, slinaber
Target Milestone: | z11 | Keywords: | Reopened, Triaged, ZStream
Target Release: | 13.0 (Queens) | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | openstack-neutron-12.1.0-6.el7ost | Doc Type: | No Doc Update
Doc Text: | | Story Points: | ---
Clone Of: | | |
: | 1779369, 1779374 | Environment: |
Last Closed: | 2020-03-10 11:26:10 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1800847 | |
Bug Blocks: | 1779369, 1779374 | |
Description GenadiC 2019-03-13 14:39:50 UTC
Succeeded to reproduce it again. It happened on OSP14 -p 2019-02-22.2

Genadi,

Can you share what procedure you followed to reproduce this on OSP 14?

Thanks,

Nate

(In reply to Nate Johnston from comment #6)
> Genadi,
>
> Can you share what procedure you followed to reproduce this on OSP 14?
>
> Thanks,
>
> Nate

Just to give a bit more context... The way OCP defines namespace isolation is the following:

- Pods in a namespace can only reach other pods in their own namespace, plus the ones in the default namespace.
- Pods in a namespace can only be reached by other pods in their own namespace, plus the ones in the default namespace.

In kuryr we translate that into OpenStack security groups: we create a specific SG for each namespace and attach it to the pods in that namespace. This SG allows all traffic from other ports that carry that SG ID. To handle the default-namespace exception, we also create a couple of extra SGs: one (SG_default) attached to the pods in the default namespace, which allows all traffic from ports carrying the other SG (SG_namespace), and a second one (SG_namespace) attached to all pods except those in the default namespace, which allows all traffic from ports carrying the SG_default SG. (A minimal sketch of this SG layout follows the comment thread below.)

I used the automation test to reproduce the problem. Using the test from kuryr-tempest-plugin did the job: run the test test_namespace_sg_svc_isolation in kuryr-tempest-plugin/kuryr_tempest_plugin/tests/scenario/test_namespace.py.

Genadi,

Can you let me have access to a reproducer environment? I've tried a couple of ways to reproduce this and I keep having issues, and now I'm about to lose my DSAL host.

Thanks,

Nate

Nate, I am trying to create an environment so you can see the problem. Will update when ready.

Provided an environment to Nate as we agreed, so he can take all the logs and see the problem live.

Agreed with Luis to let the Networking folks decide on the blocker/non-blocker question.

Tried to reproduce for the last week without any success, so closing it for now.

I'm running kuryr_tempest_plugin.tests.scenario.test_namespace.TestNamespaceScenario.test_namespace_sg_svc_isolation after running a test that restarts Neutron. I'm able to reach the pod/service in another namespace from a pod in one namespace. I gave Nate a setup to check.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0770
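To make the security-group layout described in the comment above concrete, here is a minimal sketch using openstacksdk. This is not the kuryr-kubernetes implementation; the cloud name, group names, and the allow_from_group helper are illustrative assumptions.

```python
# Illustrative sketch only (not the kuryr-kubernetes code): it mirrors the
# per-namespace SG / SG_default / SG_namespace model described in the comment
# above, using openstacksdk. Cloud name and group names are placeholders.
import openstack

conn = openstack.connect(cloud='overcloud')  # assumes a matching clouds.yaml entry


def allow_from_group(sg_id, remote_sg_id):
    """Allow all ingress traffic from ports that carry remote_sg_id."""
    for ethertype in ('IPv4', 'IPv6'):
        conn.network.create_security_group_rule(
            security_group_id=sg_id,
            direction='ingress',
            ethertype=ethertype,  # some SDK releases expose this as 'ether_type'
            remote_group_id=remote_sg_id,
        )


# Per-namespace SG: pods in the namespace can reach each other.
ns_sg = conn.network.create_security_group(name='ns-myproject')
allow_from_group(ns_sg.id, ns_sg.id)

# Extra SGs implementing the default-namespace exception.
sg_default = conn.network.create_security_group(name='sg-default-ns')
sg_namespace = conn.network.create_security_group(name='sg-non-default-ns')
allow_from_group(sg_default.id, sg_namespace.id)   # default pods accept non-default pods
allow_from_group(sg_namespace.id, sg_default.id)   # non-default pods accept default pods
```

Following the description in the comment, pod ports in the default namespace would then carry sg_default in addition to their namespace SG, while every other pod port would carry sg_namespace plus its own namespace SG.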