Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1723688

Summary: PrometheusTargetScrapesDuplicate alert throws out in fresh environment
Product: OpenShift Container Platform
Reporter: Junqi Zhao <juzhao>
Component: Monitoring
Assignee: Sergiusz Urbaniak <surbania>
Status: CLOSED ERRATA
QA Contact: Junqi Zhao <juzhao>
Severity: medium
Priority: medium
Version: 4.2.0
CC: alegrand, anpicker, erooth, mloibl, pkrupa, surbania
Target Milestone: ---
Keywords: Regression
Target Release: 4.2.0
Hardware: Unspecified
OS: Unspecified
Last Closed: 2019-10-16 06:32:26 UTC
Type: Bug
Attachments:
Description                                                              Flags
PrometheusTargetScrapesDuplicate alert throws out in fresh environment   none
only watchdog alert in fresh environment                                 none

Description Junqi Zhao 2019-06-25 07:36:47 UTC
Created attachment 1584221 [details]
PrometheusTargetScrapesDuplicate alert throws out in fresh environment

Description of problem:
The PrometheusTargetScrapesDuplicate alert fires in a fresh environment.

Checking the prometheus-k8s-0 logs shows warnings such as:
# oc -n openshift-monitoring logs -c prometheus prometheus-k8s-0 | grep different
level=warn ts=2019-06-25T07:32:20.495Z caller=scrape.go:1199 component="scrape manager" scrape_pool=openshift-monitoring/openshift-apiserver/0 target=https://10.129.0.21:8443/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=9
level=warn ts=2019-06-25T07:32:52.362Z caller=scrape.go:1199 component="scrape manager" scrape_pool=openshift-monitoring/openshift-apiserver/0 target=https://10.130.0.32:8443/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=8
level=warn ts=2019-06-25T07:33:18.110Z caller=scrape.go:1199 component="scrape manager" scrape_pool=openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.0.170.71:10257/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=22
level=warn ts=2019-06-25T07:33:21.141Z caller=scrape.go:1199 component="scrape manager" scrape_pool=openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.0.146.123:10257/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=7
level=warn ts=2019-06-25T07:33:44.058Z caller=scrape.go:1199 component="scrape manager" scrape_pool=openshift-kube-controller-manager/kube-controller-manager/0 target=https://10.0.129.45:10257/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=2
level=warn ts=2019-06-25T07:33:52.373Z caller=scrape.go:1199 component="scrape manager" scrape_pool=openshift-apiserver/openshift-apiserver/0 target=https://10.130.0.32:8443/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=10
level=warn ts=2019-06-25T07:34:10.929Z caller=scrape.go:1199 component="scrape manager" scrape_pool=openshift-monitoring/openshift-apiserver/0 target=https://10.128.0.35:8443/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=8
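
Each dropped sample in these log lines increments the prometheus_target_scrapes_sample_duplicate_timestamp_total counter, which is what drives the alert. For reference, the rule looks roughly like the following; the exact expression, duration, and labels are assumptions based on the upstream Prometheus mixin and may differ in this OCP build:

```yaml
# Sketch of the PrometheusTargetScrapesDuplicate rule (values assumed, not
# copied from this cluster's cluster-monitoring-operator configuration).
- alert: PrometheusTargetScrapesDuplicate
  expr: rate(prometheus_target_scrapes_sample_duplicate_timestamp_total[5m]) > 0
  for: 10m
  labels:
    severity: warning
  annotations:
    message: Prometheus is dropping samples that share a timestamp but have different values.
```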


Version-Release number of selected component (if applicable):
4.2.0-0.nightly-2019-06-24-160709

How reproducible:
Always

Steps to Reproduce:
1. Check the Prometheus alerts

Actual results:
The PrometheusTargetScrapesDuplicate alert fires in a fresh environment.

Expected results:
The PrometheusTargetScrapesDuplicate alert should not fire in a fresh environment.

Additional info:
https://github.com/prometheus/prometheus/blob/b98e8188769475cbd4994d2549e4e9c18be97c50/scrape/scrape.go#L1195-L1197
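
The linked lines are where Prometheus counts a scraped sample that arrives with the same timestamp as an already-ingested one for the same series but a different value. A minimal, self-contained sketch of that check (simplified illustration only, not Prometheus's actual storage code):

```go
package main

import "fmt"

// key identifies one (series, timestamp) slot in the illustrative store.
type key struct {
	series string
	ts     int64
}

// ingester mimics the check behind scrape.go#L1195-L1197: a sample whose
// (series, timestamp) was already seen with a *different* value is dropped
// and counted; identical duplicates pass through.
type ingester struct {
	seen       map[key]float64
	numDropped int
}

func newIngester() *ingester {
	return &ingester{seen: make(map[key]float64)}
}

// ingest returns false when the sample is a conflicting duplicate.
func (i *ingester) ingest(series string, ts int64, v float64) bool {
	k := key{series, ts}
	if prev, ok := i.seen[k]; ok && prev != v {
		i.numDropped++
		return false
	}
	i.seen[k] = v
	return true
}

func main() {
	ing := newIngester()
	ing.ingest(`up{instance="a"}`, 1000, 1) // first sample: accepted
	ing.ingest(`up{instance="a"}`, 1000, 1) // identical duplicate: accepted
	ing.ingest(`up{instance="a"}`, 1000, 0) // same timestamp, different value: dropped
	fmt.Println("num_dropped =", ing.numDropped)
}
```

This is why two service monitors scraping the same targets (as in comments below) trigger the alert: both scrapes produce samples for the same series and timestamps, and any value drift between them is counted as a conflicting duplicate.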

Comment 1 Frederic Branczyk 2019-06-25 07:38:32 UTC
Good catch. This is currently expected as we're in the middle of migrating some things from one repo to another. We'll keep this open as a reminder to finish that up :)

Comment 3 Sergiusz Urbaniak 2019-07-18 10:48:03 UTC
@junqi please retest; the duplicates should be gone now, as the kube-controller-manager service monitor was removed in [1] and moved to [2]:

[1] https://github.com/openshift/cluster-monitoring-operator/pull/378
[2] https://github.com/openshift/cluster-kube-controller-manager-operator/pull/258

Comment 6 Junqi Zhao 2019-07-19 02:12:19 UTC
Created attachment 1591896 [details]
only watchdog alert in fresh environment

Comment 7 errata-xmlrpc 2019-10-16 06:32:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922