Bug 1803058 - error "label value was collected before with the same name and label values" in node-exporter pod's log
Summary: error "label value was collected before with the same name and label values" ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Monitoring
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: ---
Target Release: 4.5.0
Assignee: Pawel Krupa
QA Contact: Junqi Zhao
URL:
Whiteboard:
Depends On:
Blocks: 1846857 1847961
 
Reported: 2020-02-14 11:47 UTC by Junqi Zhao
Modified: 2020-07-13 17:15 UTC
CC List: 9 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1846857
Environment:
Last Closed: 2020-07-13 17:15:32 UTC
Target Upstream Version:
Embargoed:


Attachments: None


Links
Github openshift node_exporter pull 58 (closed) - Bug 1803058: text_collectors: rely on virt-what for aws detection - last updated 2020-07-01 08:29:41 UTC
Github openshift node_exporter pull 60 (closed) - Bug 1803058: text_collectors: Don't detect AWS twice when virt-what is upgraded - last updated 2020-07-01 08:29:41 UTC
Red Hat Product Errata RHBA-2020:2409 - last updated 2020-07-13 17:15:58 UTC

Description Junqi Zhao 2020-02-14 11:47:33 UTC
Description of problem:
# oc -n openshift-monitoring get pod -o wide | grep node-exporter
node-exporter-2pzll                           2/2     Running   0          6h38m   10.0.150.249   ip-10-0-150-249.us-east-2.compute.internal   <none>           <none>
node-exporter-4kzsz                           2/2     Running   0          6h39m   10.0.165.90    ip-10-0-165-90.us-east-2.compute.internal    <none>           <none>
node-exporter-f8wrk                           2/2     Running   0          6h39m   10.0.140.210   ip-10-0-140-210.us-east-2.compute.internal   <none>           <none>
node-exporter-hxcxd                           2/2     Running   0          6h39m   10.0.158.193   ip-10-0-158-193.us-east-2.compute.internal   <none>           <none>
node-exporter-lvzlq                           2/2     Running   0          6h38m   10.0.142.187   ip-10-0-142-187.us-east-2.compute.internal   <none>           <none>
node-exporter-vjrkc                           2/2     Running   0          6h38m   10.0.170.37    ip-10-0-170-37.us-east-2.compute.internal    <none>           <none>


Check the logs on one node:
#  oc -n openshift-monitoring logs node-exporter-4kzsz -c node-exporter | tail -n5
time="2020-02-14T11:04:51Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"
time="2020-02-14T11:04:54Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"
time="2020-02-14T11:05:06Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"
time="2020-02-14T11:05:09Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"
time="2020-02-14T11:05:21Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"


#  oc -n openshift-monitoring exec -c node-exporter node-exporter-4kzsz -- cat /var/node_exporter/textfile/virt.prom
# HELP virt_platform reports one series per detected virtualization type. If no type is detected, the type is "none".
# TYPE virt_platform gauge
virt_platform{type="xen"} 1
virt_platform{type="xen-hvm"} 1
virt_platform{type="aws"} 1
virt_platform{type="aws"} 1

Debugging on this node: since it is an AWS node, the dmidecode command is not present and virt-what prints nothing, so only the AWS detection code from
https://github.com/smarterclayton/node_exporter/blob/c3cb4a222abebb488eb645ae687973cda3931e84/text_collectors/virt.sh
runs. Executing that check by hand produces virt_platform{type="aws"} 1 once:

# oc debug node/ip-10-0-165-90.us-east-2.compute.internal 
Starting pod/ip-10-0-165-90us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.0.165.90
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# dmidecode
sh: dmidecode: command not found
sh-4.4# /usr/sbin/virt-what
sh-4.4#  if ([ -f /sys/hypervisor/uuid ] && [ `head -c 3 /sys/hypervisor/uuid` == "ec2" ]) || ([ -r /sys/devices/virtual/dmi/id/product_uuid ] && [ `head -c 3 /sys/devices/virtual/dmi/id/product_uuid` == "EC2" ]); then count=$(( count + 1 )); echo "count = " $count; echo "virt_platform{type=\"aws\"} 1"; fi
count =  1
virt_platform{type="aws"} 1


Version-Release number of selected component (if applicable):
4.4.0-0.nightly-2020-02-13-212616

How reproducible:
Always

Steps to Reproduce:
1. Install an OCP 4.4 cluster on AWS.
2. Check the node-exporter container logs: oc -n openshift-monitoring logs <node-exporter-pod> -c node-exporter
3. Inspect the textfile collector output: oc -n openshift-monitoring exec -c node-exporter <node-exporter-pod> -- cat /var/node_exporter/textfile/virt.prom

Actual results:
node-exporter logs "error gathering metrics: ... collected metric \"virt_platform\" ... was collected before with the same name and label values" on every scrape, and virt.prom contains virt_platform{type="aws"} 1 twice.

Expected results:
Each detected virtualization type is reported exactly once and the node-exporter log contains no gathering errors.

Additional info:
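The gatherer error is reproducible against any node_exporter instance by feeding the textfile collector a duplicate series (illustrative sketch; write the file into whatever directory --collector.textfile.directory points at):

cat > /tmp/textfile/dup.prom <<'EOF'
# TYPE virt_platform gauge
virt_platform{type="aws"} 1
virt_platform{type="aws"} 1
EOF
# Scraping the exporter now makes it log the "was collected before with the
# same name and label values" error:
curl -s http://127.0.0.1:9100/metrics >/dev/null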

Comment 6 Junqi Zhao 2020-03-25 08:16:10 UTC
Tested with 4.5.0-0.nightly-2020-03-24-202402; the issue is fixed.
#  oc -n openshift-monitoring logs -c node-exporter node-exporter-2t2cd
time="2020-03-24T23:31:13Z" level=info msg="Starting node_exporter (version=0.18.1, branch=rhaos-4.5-rhel-7, revision=7250ea40d04ce857cf7866956341e12509ec9687)" source="node_exporter.go:156"
time="2020-03-24T23:31:13Z" level=info msg="Build context (go=go1.13.4, user=root@a7e4b5bc069e, date=20200312-12:20:36)" source="node_exporter.go:157"
time="2020-03-24T23:31:13Z" level=info msg="Enabled collectors:" source="node_exporter.go:97"
time="2020-03-24T23:31:13Z" level=info msg=" - arp" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - bcache" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - bonding" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - conntrack" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - cpu" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - cpufreq" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - diskstats" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - edac" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - entropy" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - filefd" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - filesystem" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - infiniband" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - ipvs" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - loadavg" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - mdadm" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - meminfo" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - mountstats" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - netclass" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - netdev" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - netstat" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - nfs" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - nfsd" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - pressure" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - sockstat" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - stat" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - textfile" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - time" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - timex" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - uname" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - vmstat" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - xfs" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg=" - zfs" source="node_exporter.go:104"
time="2020-03-24T23:31:13Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"

# oc -n openshift-monitoring exec -c node-exporter node-exporter-2t2cd -- cat /var/node_exporter/textfile/virt.prom
# HELP virt_platform reports one series per detected virtualization type. If no type is detected, the type is "none".
# TYPE virt_platform gauge
virt_platform{type="kvm"} 1

Comment 8 errata-xmlrpc 2020-07-13 17:15:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

