Description of problem:

The problem is related to the way the `router` works when using `NAMESPACE_LABELS`. When `NAMESPACE_LABELS` are used for `router` sharding, the `router` applies the following technique to validate and apply a route:

- The first plugin in this chain is the `HostAdmitter`, which performs validation and is the one causing the errors in the `router` log as well as in the status field of the `route`.
- The second plugin in the chain is the `UniqueHost` plugin, which keeps track of which `namespaces` it sees and filters out routes that aren't in any of those `namespaces`.

A fix (https://bugzilla.redhat.com/show_bug.cgi?id=1491717) for `HostAdmitter` is currently in the works which raises its default log level and thus avoids the error messages reported on the `router` console. But that won't solve the problem, as the `route` status is still updated (as before, with the message that validation failed due to the `ALLOWED_DOMAIN` option).

The root cause is that `HostAdmitter` runs before `UniqueHost`, so all `routes` are verified by all `routers`, and each `router` reports back an error if validation fails.

The good news is that this has no impact on the availability of the platform or its functionality. Everything works, and will continue to work, as expected. The problem is therefore only one of presentation in the Web Console and in the `route` status. We cannot simply change the Web Console, as we actually want to see the `route` status, especially when something is not OK. We therefore want to track the possibility of reordering the `UniqueHost` and `HostAdmitter` plugins, if possible, so that the route is validated only after selection has been applied by `UniqueHost`.

Version-Release number of selected component (if applicable):
- ose-haproxy-router:v3.6.173.0.96-2

How reproducible:
- Always

Steps to Reproduce:
1. Set up router sharding as per https://docs.openshift.com/container-platform/3.6/architecture/networking/routes.html#router-sharding and https://docs.openshift.com/container-platform/3.6/install_config/router/default_haproxy_router.html#using-router-shards using `NAMESPACE_LABELS` (see the sketch after this report)
2. Configure one `router` with `ALLOWED_DOMAIN`
3. Create a new route for a specific `NAMESPACE_LABELS`
4. Check the route status for an error "Rejected by router"

Actual results:
The `route` reports "Rejected by router" because its validation fails due to `ALLOWED_DOMAIN`.

Expected results:
No error should be reported, as validation of `ALLOWED_DOMAIN` should only happen on the `router` to which the `route` is applied.

Additional info:
Issue was discussed with the owner of https://bugzilla.redhat.com/show_bug.cgi?id=1491717.
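For reference, a minimal sketch of the sharding setup from the steps to reproduce, assuming a dedicated shard named `router-shard` (the router name, label, and domain are placeholders; the env var names `NAMESPACE_LABELS` and `ROUTER_ALLOWED_DOMAINS` match the ones used in the verification steps later in this report):

$ # create an additional router shard (name and options are illustrative)
$ oc adm router router-shard --replicas=1 --service-account=router
$ # restrict the shard to labeled namespaces and to one allowed domain
$ oc set env dc/router-shard NAMESPACE_LABELS="team=red" ROUTER_ALLOWED_DOMAINS="test.example.com"

Because `HostAdmitter` runs before `UniqueHost`, this shard also writes a rejection status onto routes whose host is outside `test.example.com`, even when their namespace does not carry the `team=red` label.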
Fix in PR: https://github.com/openshift/origin/pull/19330
Verification steps:

1. Update the router:

$ oc env dc router NAMESPACE_LABELS=team=red ROUTER_ALLOWED_DOMAINS=test.zzhao.com

2. Create some routes as user1 and check that the route hostnames are correct:

$ oc get route
NAME      HOST/PORT                   PATH      SERVICES               PORT      TERMINATION   WILDCARD
edge1     header.edge.example.com               header-test-insecure   http      edge          None
edge32    32header.edge.example.com             header-test-insecure   http      edge          None

3. Add the label team=red to the namespace using the admin user:

$ oc label namespace z1 team=red

4. Check all routes again:

$ oc get route
NAME      HOST/PORT          PATH      SERVICES               PORT      TERMINATION   WILDCARD
edge1     RouteNotAdmitted             header-test-insecure   http      edge          None
edge32    RouteNotAdmitted             header-test-insecure   http      edge          None

5. Change the namespace label to team=blue:

$ oc label namespace z1 team=blue --overwrite

6. Check all routes again; the route hostname still shows 'RouteNotAdmitted':

$ oc get route
NAME      HOST/PORT          PATH      SERVICES               PORT      TERMINATION   WILDCARD
edge1     RouteNotAdmitted             header-test-insecure   http      edge          None
edge32    RouteNotAdmitted             header-test-insecure   http      edge          None
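To see what is behind the 'RouteNotAdmitted' marker in the HOST/PORT column, the route status can be inspected directly (edge1 and z1 are taken from the steps above; the exact wording of the condition depends on the router version):

$ # full status, including one ingress entry per router that evaluated the route
$ oc get route edge1 -n z1 -o yaml
$ # human-readable summary of the same information
$ oc describe route edge1 -n z1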
Sorry, I forgot to add the tested version:

# oc version
oc v3.10.0-0.47.0
kubernetes v1.10.0+b81c8f8
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://ip-172-18-10-50.ec2.internal:8443
When you change the label so that the sharded router no longer sees it, nothing cleans up the status. Please see https://docs.openshift.com/container-platform/3.4/architecture/core_concepts/routes.html#route-status-field
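A quick way to see the stale entries this comment refers to is to list, per route, which routers have written a status entry (a rough sketch assuming the namespace z1 from the verification steps; the jsonpath expression only prints the first condition of each ingress entry):

# list, for every route in namespace z1, the routers that wrote a status entry
for r in $(oc get routes -n z1 -o name); do
  echo "== ${r}"
  oc get "${r}" -n z1 -o jsonpath='{range .status.ingress[*]}{.routerName}{"\t"}{.conditions[0].type}{"="}{.conditions[0].status}{"\n"}{end}'
done

Entries written by the shard router remain even after the namespace label no longer matches `NAMESPACE_LABELS`, because nothing removes them.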
According to comment 6, after running ./clear-route-status.sh z1 ALL these routes come back. Verified this bug.
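The clear-route-status.sh from comment 6 is not reproduced here; the following is only a rough sketch of a cleanup along those lines, assuming the route status is writable via a regular `oc patch` on this version (the single namespace argument and the "remove every ingress entry" behaviour are simplifications of whatever the real script does):

#!/bin/bash
# illustrative sketch only, not the script from comment 6:
# drop all status.ingress entries for every route in the given namespace
ns="${1:?usage: $0 <namespace>}"
for r in $(oc get routes -n "${ns}" -o name); do
  # "remove" assumes /status/ingress exists; routes without it fail with an error
  oc patch "${r}" -n "${ns}" --type=json -p='[{"op": "remove", "path": "/status/ingress"}]'
done

Once the stale entries are gone, the routers that still admit the route write their status again, which is why the hostnames come back as described above.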
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:1816