Description of problem:

We've been trying to get sharded routing to work (and having problems, for which I'll open a separate ticket). In the process we exposed routes on the wrong router and also deleted routers, and we noticed that the list of ingress points in a route's status never seems to get cleaned up: it still references routers that were deleted earlier.

# oc get routes -n syseng-validation -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Route
  metadata:
    annotations:
      openshift.io/host.generated: "true"
    creationTimestamp: 2016-07-07T14:37:45Z
    labels:
      app: hello-world
    name: hello-world
    namespace: syseng-validation
    resourceVersion: "1498409"
    selfLink: /oapi/v1/namespaces/syseng-validation/routes/hello-world
    uid: 5e42c371-4450-11e6-a054-001a4a1b5d47
  spec:
    host: hello-world-syseng-validation.int.paas.qa.redhat.com
    port:
      targetPort: 8080-tcp
    tls:
      termination: edge
    to:
      kind: Service
      name: hello-world
  status:
    ingress:
    - conditions:
      - lastTransitionTime: 2016-07-07T14:37:45Z
        status: "True"
        type: Admitted
      host: hello-world-syseng-validation.int.paas.qa.redhat.com
      routerName: router
    - conditions:
      - lastTransitionTime: 2016-07-08T12:27:51Z
        status: "True"
        type: Admitted
      host: hello-world-syseng-validation.int.paas.qa.redhat.com
      routerName: router-internal
    - conditions:
      - lastTransitionTime: 2016-07-08T12:41:55Z
        status: "True"
        type: Admitted
      host: hello-world-syseng-validation.int.paas.qa.redhat.com
      routerName: router-external
    - conditions:
      - lastTransitionTime: 2016-07-13T14:43:58Z
        status: "True"
        type: Admitted
      host: hello-world-syseng-validation.int.paas.qa.redhat.com
      routerName: router-qa-internal-phx1
    - conditions:
      - lastTransitionTime: 2016-07-13T14:57:45Z
        status: "True"
        type: Admitted
      host: hello-world-syseng-validation.ext.paas.qa.redhat.com
      routerName: router-qa-external-phx1
kind: List
metadata: {}

# oc get dc
NAME                      REVISION   REPLICAS   TRIGGERED BY
docker-registry           2          3          config
router-qa-external-phx1   4          2          config
router-qa-internal-phx1   1          2          config

Only the router-qa-internal-phx1 and router-qa-external-phx1 routers still exist, yet the route status lists five routerName entries. This is causing the wrong domain name to show up in the web interface and confusing our developers.
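To surface the stale entries at a glance, a small filter along these lines (a sketch, not part of the product) pulls the routerName of each recorded ingress entry. The sample input mirrors the status above; against a live cluster you would feed it `oc get routes -n syseng-validation -o yaml` instead.

```shell
# Sketch: list each routerName recorded in route status, deduplicated,
# so routers that no longer exist stand out. The field name matches the
# status output shown above.
extract_routers() {
  grep 'routerName:' | awk '{print $2}' | sort -u
}

# Sample input mirroring the stale status; on a live cluster, pipe
# `oc get routes -n syseng-validation -o yaml` in instead.
printf 'routerName: router\nrouterName: router-internal\nrouterName: router\n' \
  | extract_routers
# prints:
# router
# router-internal
```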
Actual results: The old routers are still listed under the route's ingress points. Expected results: The stale ingress entries should be cleaned up once the corresponding routers no longer exist.
*** Bug 1323946 has been marked as a duplicate of this bug. ***
This has been merged into ocp and is in OCP v3.5.0.18 or newer.
Hi, I found this script a little unfriendly to run. Steps:
1. curl -O https://raw.githubusercontent.com/openshift/origin/master/images/router/clear-route-status.sh
2. # ./clear-route-status.sh default ALL
   ./clear-route-status.sh: line 15: jq: command not found
Should we build a container for people to run, instead of the script itself?
(In reply to Eric Paris from comment #13) > Should we build a container for people to run? Instead of the script itself? Why not use a scheduled job?
The decision was made to provide the script with a better failure message when jq is not installed. In 3.6 we may provide a container which would allow one to easily automate this process.
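The improved failure message could take a shape like the following (a sketch of the approach only; the actual wording and structure in the merged patch may differ):

```shell
# Sketch: fail fast with a readable message when a required tool is
# missing, instead of bash's raw "jq: command not found".
require() {
  if ! command -v "$1" >/dev/null 2>&1; then
    echo "error: this script requires '$1'; please install it and retry" >&2
    return 1
  fi
}

# A script would call `require jq || exit 1` before doing any work.
require sh && echo "sh is available"
```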
Origin PR: https://github.com/openshift/origin/pull/12943
Note: There eventually needs to be a doc to cover how the user is expected to use this to properly administer a sharded router, or when a router is deleted.
This has been merged into ocp and is in OCP v3.5.0.21 or newer.
I was assigned this bug. Please provide the related docs at least; I will verify this bug according to the documented steps.
Thanks for the documentation, moving to ON_QA
Verified this bug. Steps:
$ ./clear-route-status.sh default service-unsecure1
route status for route service-unsecure1 in namespace default cleared
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:0884