In investigating bug 1780398, I noticed that the router logs had thousands of log messages that were identical but for the timestamp:

E1205 20:10:23.002883 1 router_controller.go:244] another route has claimed this host

This log message is unhelpful because it does not identify the involved routes or the host. Moreover, it is redundant: the router emits another log message, at a higher log level, that does identify the routes and the host, and the same information should be reflected in the route's status as well. Thus the router should not emit the above log message.

+++ This bug was initially created as a clone of Bug #1780398 +++

Description of problem:

Upgrading a cluster with 2K projects from 4.2.9 to 4.3.0-0.nightly-2019-12-05-073829 failed, with monitoring hung for an hour.

[...]

--- Additional comment from Miciah Dashiel Butler Masters on 2019-12-06 18:58:26 UTC ---

The route in question was created at 2019-12-05T20:09:20Z. From 2019-12-05T20:09:41Z to 2019-12-05T20:12:15Z, the API was returning "forbidden: not yet ready to handle request" errors (not only to the router: I see the same error in the logs for the auth operator, the samples operator, the CVO, and kube-controller-manager). The router may have admitted the route, but it failed to update the route's status because of the API outage. The router is designed to function with limited privileges. It is not clear to me whether the router should retry on forbidden errors, or whether the API is incorrect in returning a forbidden error for a request that should be retried.
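To illustrate the difference between the unhelpful message and an actionable one, here is a minimal sketch. The helper name `hostConflictMessage` and the example route names are hypothetical, not taken from the actual router code; the point is only the shape of a message that names both routes and the contested host, as the router's higher-verbosity message does.

```go
package main

import "fmt"

// hostConflictMessage is a hypothetical helper showing the kind of
// detailed message that is actionable in logs: it identifies the
// rejected route, the contested host, and the route that owns the
// host, unlike the bare "another route has claimed this host".
func hostConflictMessage(host, owner, rejected string) string {
	return fmt.Sprintf(
		"route %s cannot claim host %q: already claimed by route %s",
		rejected, host, owner)
}

func main() {
	// Example route names are made up for illustration.
	fmt.Println(hostConflictMessage(
		"grafana.apps.example.com",
		"openshift-monitoring/grafana",
		"openshift-monitoring/grafana-clone"))
}
```

With a message of this shape, a single occurrence in the logs is enough to find the conflicting routes, so there is no need to repeat it thousands of times at error level.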
In each router pod's logs, I see 9995 instances of this message logged in a period of about 11 seconds. Just to make sure I'm not missing a more serious issue, can you confirm that the test in question created about 9995 routes with conflicting host names?
The test does create a large number of artificial routes from the same template. I will open an issue to make those routes more realistic.
Verified with 4.4.0-0.nightly-2019-12-20-210709; the issue has been fixed.

Create two routes with the same hostname, then check the router logs:

$ oc -n openshift-ingress logs router-default-568c75cc44-8jgqf | grep "claimed"

Repeat the same steps with the old build:

$ oc -n openshift-ingress logs router-default-678757dd74-5nzc5 | grep "claimed"
E1223 08:10:14.876097 1 router_controller.go:244] another route has claimed this host
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0581