QE is looking for help verifying; I assume you're their best bet, @wking.
Repeating the query from , but with a reduced maxAge, because  landed in 4.8 four days ago:
$ w3m -dump -cols 200 'https://search.ci.openshift.org/?search=Watchdog+alert+had+missing+intervals&maxAge=72h&type=junit' | grep 'failures match' | sort
periodic-ci-openshift-release-master-ci-4.8-e2e-aws-upgrade-single-node (all) - 3 runs, 100% failed, 100% of failures match = 100% impact
periodic-ci-openshift-release-master-ci-4.8-e2e-azure-upgrade-single-node (all) - 3 runs, 100% failed, 33% of failures match = 33% impact
periodic-ci-openshift-release-master-ci-4.8-upgrade-from-stable-4.7-e2e-aws-uwm (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
periodic-ci-openshift-release-master-ci-4.8-upgrade-from-stable-4.7-e2e-azure-ovn-upgrade (all) - 3 runs, 100% failed, 33% of failures match = 33% impact
periodic-ci-openshift-release-master-ci-4.8-upgrade-from-stable-4.7-e2e-gcp-upgrade (all) - 9 runs, 78% failed, 14% of failures match = 11% impact
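The "impact" column in those rows appears to be the failure rate multiplied by the match rate, i.e. the share of all runs that both failed and matched the search. A minimal sketch of that arithmetic (the helper name is mine, and the rounding behavior is an assumption based on the rows above):

```python
def impact_pct(failed_pct: float, match_pct: float) -> int:
    """Share of all runs that both failed and matched the search, as a percent.

    Hypothetical reconstruction of search.ci's "impact" column:
    e.g. the gcp-upgrade row: 78% failed, 14% of failures match -> 11% impact.
    """
    return round(failed_pct / 100 * match_pct / 100 * 100)


print(impact_pct(78, 14))    # gcp-upgrade row above -> 11
print(impact_pct(100, 33))   # azure-ovn-upgrade row above -> 33
```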
So that's... better. Poking at one of the single-node hits:
INFO[2021-12-19T22:40:01Z] Resolved release initial to registry.ci.openshift.org/ocp/release:4.8.0-0.ci-2021-12-10-211525
INFO[2021-12-19T22:40:01Z] Resolved release latest to registry.ci.openshift.org/ocp/release:4.8.0-0.ci-2021-12-11-001048
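Those CI release names embed the build timestamp, which is how you can tell both the initial and target releases predate the fix. A quick sketch of pulling that timestamp out for comparison (the parsing approach and the `4.8.0-0.ci-YYYY-MM-DD-HHMMSS` format assumption are mine, inferred from the pullspecs above):

```python
from datetime import datetime


def release_build_time(pullspec: str) -> datetime:
    """Extract the build timestamp from a CI release pullspec.

    Assumes version strings like 4.8.0-0.ci-2021-12-10-211525, where the
    suffix after the last "ci-" is the build time as YYYY-MM-DD-HHMMSS.
    """
    stamp = pullspec.rsplit("ci-", 1)[1]
    return datetime.strptime(stamp, "%Y-%m-%d-%H%M%S")


# The "latest" release resolved above was built on 2021-12-11, days
# before the fix landed, so the job under test couldn't contain it.
print(release_build_time(
    "registry.ci.openshift.org/ocp/release:4.8.0-0.ci-2021-12-11-001048"))
```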
No idea why they're still running jobs between those older nightlies, but it makes sense that jobs whose target release doesn't contain the fix will still be impacted. I'll optimistically close CURRENTRELEASE based on the reduction in hit volume, and we can open a new series, or come back to this run, if this test-case keeps bothering us going forward.