Created attachment 1584221 [details]
PrometheusTargetScrapesDuplicate alert fires in a fresh environment

Description of problem:
The PrometheusTargetScrapesDuplicate alert fires in a fresh environment.

Checking the prometheus-k8s-0 logs shows errors such as:

# oc -n openshift-monitoring logs -c prometheus prometheus-k8s-0 | grep different
level=warn ts=2019-06-25T07:32:20.495Z caller=scrape.go:1199 component="scrape manager" scrape_pool=openshift-monitoring/openshift-apiserver/0 target=https://10.129.0.21:8443/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=9
level=warn ts=2019-06-25T07:32:52.362Z caller=scrape.go:1199 component="scrape manager" scrape_pool=openshift-monitoring/openshift-apiserver/0 target=https://10.130.0.32:8443/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=8
level=warn ts=2019-06-25T07:33:18.110Z caller=scrape.go:1199 component="scrape manager" scrape_pool=openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.0.170.71:10257/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=22
level=warn ts=2019-06-25T07:33:21.141Z caller=scrape.go:1199 component="scrape manager" scrape_pool=openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.0.146.123:10257/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=7
level=warn ts=2019-06-25T07:33:44.058Z caller=scrape.go:1199 component="scrape manager" scrape_pool=openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.0.129.45:10257/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=2
level=warn ts=2019-06-25T07:33:52.373Z caller=scrape.go:1199 component="scrape manager" scrape_pool=openshift-apiserver/openshift-apiserver/0 target=https://10.130.0.32:8443/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=10
level=warn ts=2019-06-25T07:34:10.929Z caller=scrape.go:1199 component="scrape manager" scrape_pool=openshift-monitoring/openshift-apiserver/0 target=https://10.128.0.35:8443/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=8

Version-Release number of selected component (if applicable):
4.2.0-0.nightly-2019-06-24-160709

How reproducible:
Always

Steps to Reproduce:
1. Check the Prometheus alerts.

Actual results:
The PrometheusTargetScrapesDuplicate alert fires in a fresh environment.

Expected results:
The PrometheusTargetScrapesDuplicate alert should not fire in a fresh environment.

Additional info:
https://github.com/prometheus/prometheus/blob/b98e8188769475cbd4994d2549e4e9c18be97c50/scrape/scrape.go#L1195-L1197
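For context on the warnings above: during a single scrape, Prometheus rejects a sample when the same series (identical metric name and label set) has already been ingested at the same timestamp with a different value, and `num_dropped` counts those rejections. The sketch below illustrates that duplicate check in miniature; the function name, sample data, and logic are illustrative assumptions, not the actual scrape.go implementation.

```python
# Illustrative sketch (NOT Prometheus's real code): count samples in one
# scrape that repeat an already-seen (series, timestamp) pair with a
# different value -- the condition behind "Error on ingesting samples
# with different value but same timestamp".

def count_duplicate_samples(samples):
    """samples: iterable of (series, timestamp, value) tuples, where
    'series' stands for the metric name plus its full label set."""
    seen = {}      # (series, timestamp) -> first value ingested
    dropped = 0
    for series, ts, value in samples:
        key = (series, ts)
        if key in seen and seen[key] != value:
            dropped += 1   # same series and timestamp, conflicting value
        else:
            seen.setdefault(key, value)
    return dropped

# Hypothetical scrape payload: the first series appears twice at the
# same timestamp with different values, so one sample is dropped.
scrape = [
    ('apiserver_request_count{code="200"}', 1561447940, 12.0),
    ('apiserver_request_count{code="200"}', 1561447940, 15.0),
    ('apiserver_request_count{code="404"}', 1561447940, 3.0),
]
print(count_duplicate_samples(scrape))  # -> 1
```

In this bug's case, the conflicting samples came from the same targets being selected by overlapping service monitors, so each scrape pool ingested the series twice.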
Good catch. This is currently expected as we're in the middle of migrating some things from one repo to another. We'll keep this open as a reminder to finish that up :)
@junqi, please retest; the duplicates should be gone now, as the kube-controller-manager service monitor was removed in [1] and moved to [2].

[1] https://github.com/openshift/cluster-monitoring-operator/pull/378
[2] https://github.com/openshift/cluster-kube-controller-manager-operator/pull/258
Created attachment 1591896 [details]
Only the Watchdog alert fires in a fresh environment
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922