Bug 1275540 - [networking_132]An entry exists in the router for a route in another namespace, even though the router was created in a specific namespace
Summary: [networking_132]An entry exists in the router for a route in another namespace, even though the router was created in a specific namespace
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OKD
Classification: Red Hat
Component: Networking
Version: 3.x
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Ravi Sankar
QA Contact: Meng Bo
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-10-27 08:18 UTC by Yan Du
Modified: 2015-11-23 21:17 UTC (History)
7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-11-23 21:17:21 UTC
Target Upstream Version:
Embargoed:



Description Yan Du 2015-10-27 08:18:17 UTC
Description of problem:

An entry exists in the router for a route in another namespace, even though the router was created in a specific namespace.


Version-Release number of selected component (if applicable):
multi-node env with multi-tenant plugin
oc v1.0.6-964-g814c05e-dirty
kubernetes v1.2.0-alpha.1-1107-g4c8e6f4



How reproducible:
Always



Steps to Reproduce:

1. oc new-project d1

2. Create a service account and edit the SCC, then create a router in project d1 with the --host-network=false option

# oadm router --create --credentials=/root/src/openshift.local.config/master/openshift-router.kubeconfig --service-account=router --replicas=3 -n d1 --host-network=false

3. Create pod/service/route in project d1
# oc get route -n d1
NAME           HOST/PORT                                           PATH      SERVICE        LABELS              TLS TERMINATION
test-service   test-service-d1.router.default.svc.cluster.local             test-service   name=test-service  
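The pod/service/route creation in step 3 can be sketched with a sequence like the following (the JSON file names are hypothetical; oc expose generates the route from the service, producing the output shown above):

```shell
oc create -f pod.json -n d1            # a pod labeled name=test-service
oc create -f service.json -n d1        # a service named test-service selecting that pod
oc expose service test-service -n d1   # create a route for the service
```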

4. oc new-project d2

5. Create pod/service/route in project d2
# oc get route -n d2
NAME           HOST/PORT                                           PATH      SERVICE        LABELS              TLS TERMINATION
test-service   test-service-d2.router.default.svc.cluster.local             test-service   name=test-service  

6. Try to access the pod in project d2 from router pod

7. Check the os_http_be.map file in router pod




Actual results:

6. The pod in project d2 could not be pinged successfully from the router pod

7. But the entry still exists in the os_http_be.map file:
# cat os_http_be.map 
test-service-d2.router.default.svc.cluster.local d2_test-service
test-service-d1.router.default.svc.cluster.local d1_test-service
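For reference, each line of os_http_be.map pairs a route hostname with an HAProxy backend name. The lookup can be sketched with awk, using the sample data from the output above:

```shell
# Recreate the map contents observed in the router pod (sample from this bug)
cat > os_http_be.map <<'EOF'
test-service-d2.router.default.svc.cluster.local d2_test-service
test-service-d1.router.default.svc.cluster.local d1_test-service
EOF

# Resolve a route hostname to its backend, as HAProxy's map lookup would
backend=$(awk -v host="test-service-d2.router.default.svc.cluster.local" \
    '$1 == host { print $2 }' os_http_be.map)
echo "$backend"   # → d2_test-service
```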



Expected results:
The entry should not exist in the router, since the router was created in a specific namespace and should only load-balance to pods in that namespace.

Comment 1 Ben Bennett 2015-10-28 15:49:14 UTC
Based on conversations with eparis and ccoleman:

* This is "functions as designed". A router in project 1 can accept routes from project 2.
* However, if the two projects are isolated, then they cannot communicate.
* So you either need to join the networking of the two, or make one project global.
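Joining or globalizing the project networks (the two options mentioned above) is done with the pod-network subcommands when the multi-tenant plugin is in use; a minimal sketch, using the project names from this bug:

```shell
# Option 1: join the pod networks of the two projects so they can communicate
oadm pod-network join-projects --to=d1 d2

# Option 2: make one project's network global (reachable from all projects)
oadm pod-network make-projects-global d1
```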

What we should do:
* Warn the user if they add a route to a router in a different project and the two projects do not have permission to communicate
* Note: if they are not using the multi-tenant networking plugin, then the two will be able to communicate regardless of whether the networks have been joined

QA:
* You need to test this with the multi-tenant plugin enabled to make sure that isolation correctly prevents the two from talking
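The isolation check described above can be sketched from the CLI; a hedged example, assuming a pod named test-pod exists in each project (pod name is illustrative):

```shell
# Get the IP of a pod in d2, then try to reach it from a pod in d1.
# With the multi-tenant plugin, this should fail unless the projects
# have been joined or one has been made global.
POD_IP=$(oc get pod test-pod -n d2 -o template --template='{{ .status.podIP }}')
oc exec test-pod -n d1 -- ping -c 3 "$POD_IP"
```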

Comment 2 Yan Du 2015-10-29 10:12:57 UTC
@Ben

I still have a question; could you please help with it?

1. Create a router in project1 by default (--host-network=true); the router will get the same IP as the node.
2. Create some pods in project2.

When we try to ping a pod in project2 from the router pod in project1, the ping succeeds.

Is this the correct behavior?

Comment 3 Eric Paris 2015-10-29 16:24:08 UTC
@Yan Du

Was this using the multi-tenant plugin or the old 3.0 plugin?

Comment 4 Ben Bennett 2015-10-29 16:54:15 UTC
@Yan Du:

I believe it is the right behavior, but we should probably document it more clearly. With the new router you are giving it privilege, and it is using that privilege to access the host network. So it bypasses the pod SDN and cannot be restricted by the multi-tenant networking plugin.

Comment 5 Clayton Coleman 2015-10-29 18:19:49 UTC
Note (unrelated to the host network / SDN problem) that when you create a router, it will by default try to look at all routes in all namespaces.  You have to edit the router deployment config to tell it to only look at specific namespaces.
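The restriction Clayton describes can be expressed by setting a label-selector environment variable on the router deployment config; a hedged sketch, assuming the router image honors NAMESPACE_LABELS (the label key/value is illustrative):

```shell
# Label the namespace the router should serve, then restrict the router
# deployment config to namespaces matching that label.
oc label namespace d1 router=r1
oc env dc/router NAMESPACE_LABELS="router=r1" -n d1
```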

Comment 6 Ram Ranganathan 2015-10-29 18:23:08 UTC
@Yan Du this is expected behavior. For the host-network case, that is equivalent to exposing the route externally, so it should work even with multi-tenant isolation. If a user wants true isolation, then don't add an external route to the pod.
 
As for the issue of routes appearing on routers when using container networking: that is the current behavior, but as Ben mentioned we should probably document it.
Maybe file an RFE/Origin issue (or we can use this bug for it) to only add/show/use routes based on a selector; that would be cleaner, since we would only add the routes that are actually serviced by the router.

Comment 7 Ravi Sankar 2015-10-29 22:11:53 UTC
Limitation documented in https://github.com/openshift/openshift-docs/pull/1124

Trello card for restricting router routes to selected namespaces: https://trello.com/c/xnSTCLnx

Comment 8 Yan Du 2015-10-30 05:11:38 UTC
Moving the bug to VERIFIED according to the above comments. Thanks.

