Bug 1689836

Summary: The OLM metrics should be queryable from the Prometheus UI
Product: OpenShift Container Platform Reporter: Jian Zhang <jiazha>
Component: OLM    Assignee: Jeff Peeler <jpeeler>
Status: CLOSED ERRATA QA Contact: Jian Zhang <jiazha>
Severity: medium Docs Contact:
Priority: medium    
Version: 4.1.0    CC: bandrade, chezhang, chuo, dyan, fbranczy, jfan, scolange, zitang
Target Milestone: ---   
Target Release: 4.1.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version:    Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2019-06-04 10:46:01 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Jian Zhang 2019-03-18 09:24:08 UTC
Description of problem:
The OLM metrics are not queryable from the Prometheus UI. 

Version-Release number of selected component (if applicable):
Cluster version is 4.0.0-0.nightly-2019-03-15-063749
OLM commit:
               io.openshift.build.commit.id=840d806a3b20e5ebb7229631d0168864b1cfed12
               io.openshift.build.commit.url=https://github.com/operator-framework/operator-lifecycle-manager/commit/840d806a3b20e5ebb7229631d0168864b1cfed12
               io.openshift.build.source-location=https://github.com/operator-framework/operator-lifecycle-manager


How reproducible:
always

Steps to Reproduce:
1. Install the OCP 4.0
2. Log in to the web console as the kubeadmin user, click "Monitoring" -> "Metrics", log in again when prompted, and query the metrics below:
install_plan_count
subscription_count
catalog_source_count
csv_count 
csv_upgrade_count
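Whether these metric names are queryable can also be checked programmatically against the Prometheus API's metric-name list (`/api/v1/label/__name__/values`). A minimal sketch, run here against a canned JSON reply rather than a live cluster (fetching the real reply from the cluster's Prometheus route, with a bearer token, is left out):

```python
import json

# The metric names this bug report expects to find in Prometheus.
OLM_METRICS = [
    "install_plan_count",
    "subscription_count",
    "catalog_source_count",
    "csv_count",
    "csv_upgrade_count",
]

def missing_metrics(api_response: str) -> list:
    """Return the OLM metrics absent from a /api/v1/label/__name__/values reply."""
    names = set(json.loads(api_response).get("data", []))
    return [m for m in OLM_METRICS if m not in names]

# Canned reply simulating the broken state: no OLM metrics registered.
reply = json.dumps({"status": "success", "data": ["up", "go_goroutines"]})
print(missing_metrics(reply))  # all five names are reported missing
```

In the failing state described above, all five names would be returned; after the fix, the list should be empty.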


Actual results:
Got nothing.

Expected results:
The OLM metrics should be queryable from the Prometheus UI.

Additional info:

Comment 3 Jian Zhang 2019-04-04 06:02:00 UTC
Verification failed.
OLM version:  io.openshift.build.commit.id=9ba3512c5406b62179968e2432b284e9a30c321e

1. I didn't find any metrics by following the steps described in the original description.
2. I tried to `curl` the metrics port but got nothing, as shown below:

[jzhang@dhcp-140-18 444]$ oc get pods
NAME                                READY   STATUS    RESTARTS   AGE
catalog-operator-7db68c98fb-p4nss   1/1     Running   0          165m
olm-operator-5f7cfb8cdc-hljzb       1/1     Running   0          165m
olm-operators-fz8mb                 1/1     Running   0          163m
packageserver-c96b4d7b7-swcnx       1/1     Running   0          162m
packageserver-c96b4d7b7-wvpkh       1/1     Running   0          163m

[jzhang@dhcp-140-18 444]$ oc port-forward catalog-operator-7db68c98fb-p4nss 8081
Forwarding from 127.0.0.1:8081 -> 8081
Forwarding from [::1]:8081 -> 8081
Handling connection for 8081
E0404 13:58:05.171540   25765 portforward.go:331] an error occurred forwarding 8081 -> 8081: error forwarding port 8081 to pod f2856102d5b1107e3b3f4fb6c28d29f185d8149c2a05469c4425068b630be8de, uid : exit status 1: 2019/04/04 05:58:05 socat[12494] E connect(5, AF=2 127.0.0.1:8081, 16): Connection refused

[jzhang@dhcp-140-18 ~]$ curl -k http://localhost:8081/metrics -v
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8081 (#0)
> GET /metrics HTTP/1.1
> Host: localhost:8081
> User-Agent: curl/7.59.0
> Accept: */*
> 
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server

^C[jzhang@dhcp-140-18 444]$ oc port-forward olm-operator-5f7cfb8cdc-hljzb 8081
Forwarding from 127.0.0.1:8081 -> 8081
Forwarding from [::1]:8081 -> 8081
Handling connection for 8081
E0404 14:01:10.845859   25782 portforward.go:331] an error occurred forwarding 8081 -> 8081: error forwarding port 8081 to pod 9f9967270f5ae39ffe5dd59369fe9c55a60354d28df1dd77643d500ef2a47a21, uid : exit status 1: 2019/04/04 06:01:10 socat[12363] E connect(5, AF=2 127.0.0.1:8081, 16): Connection refused
[jzhang@dhcp-140-18 ~]$ curl -k http://localhost:8081/metrics -v
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8081 (#0)
> GET /metrics HTTP/1.1
> Host: localhost:8081
> User-Agent: curl/7.59.0
> Accept: */*
> 
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server

Comment 4 Jeff Peeler 2019-04-08 20:16:31 UTC
The implementation has changed since the first metrics addition, so curling an http endpoint is not going to work. Shouldn't the test be more targeted at looking for the applicable OLM metrics in prometheus since that was the purpose of the latest change? If you want to do a sanity check to verify metrics are being served at the pod level, curling over https is what you need to do.

I assume that port forward attempt was unsuccessful given that you got a connection refused error message. I'd try to forward like `oc port-forward -p mypod :8081` just to ensure that there's not a local port conflict in play here.

Comment 5 Jian Zhang 2019-04-10 08:43:38 UTC
Jeff,

Thanks for the information. I logged in to the web console and still get the same errors, as below:
Click "Monitoring"->"Metrics" -> "Login as the Openshift"->"Status"->"Targets":

The status of the "openshift-operator-lifecycle-manager/catalog-operator" and "openshift-operator-lifecycle-manager/olm-operator" are DOWN. Errors:
Get https://10.128.0.3:8081/metrics: dial tcp 10.128.0.3:8081: connect: connection refused
Get https://10.128.0.7:8081/metrics: dial tcp 10.128.0.7:8081: connect: connection refused

> If you want to do a sanity check to verify metrics are being served at the pod level, curling over https is what you need to do.
Thanks! Got it, but it seems the olm-operator/catalog-operator cannot handle the SSL connection, as shown below:
[jzhang@dhcp-140-18 ocp410]$ oc port-forward olm-operator-7bd6c84b68-5tkgm 8081
Forwarding from 127.0.0.1:8081 -> 8081
Forwarding from [::1]:8081 -> 8081
Handling connection for 8081
E0410 16:38:56.209774    3743 portforward.go:331] an error occurred forwarding 8081 -> 8081: error forwarding port 8081 to pod 03e2e7c8663cd342e5dcc82e23785bf75a1cfc63633bb851b79ea73fcd571d24, uid : exit status 1: 2019/04/10 08:38:56 socat[31905] E connect(5, AF=2 127.0.0.1:8081, 16): Connection refused
Handling connection for 8081

[jzhang@dhcp-140-18 ~]$ curl -k https://localhost:8081/metrics -v
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8081 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* ignoring certificate verify locations due to disabled peer verification
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to localhost:8081 
* stopped the pause stream!
* Closing connection 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to localhost:8081 
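An `SSL_ERROR_SYSCALL` like the one above is what a TLS client reports when the peer never completes a handshake. One way to reproduce that class of failure locally is a TLS client talking to a plaintext listener; this is a sketch of the symptom on a loopback port, not a claim about the OLM pod itself (the port-forward log above also shows a plain connection refused, so the pod may simply not be listening on 8081):

```python
import socket
import ssl
import threading

# Plaintext listener standing in for an endpoint that does not speak TLS.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()
    conn.sendall(b"HTTP/1.1 200 OK\r\n\r\n")  # plain HTTP, not a TLS ServerHello
    conn.close()

threading.Thread(target=serve_once, daemon=True).start()

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # the equivalent of `curl -k`
try:
    with socket.create_connection(("127.0.0.1", port)) as sock:
        ctx.wrap_socket(sock)  # TLS handshake against plaintext fails
    handshake_failed = False
except (ssl.SSLError, OSError):
    handshake_failed = True
print(handshake_failed)
```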

> I'd try to forward like `oc port-forward -p mypod :8081` just to ensure that there's not a local port conflict in play here.

Thanks for your suggestion, but my `oc port-forward` has no `-p` flag. Maybe I need to update my `oc` client.
[jzhang@dhcp-140-18 ocp410]$ oc port-forward -p catalog-operator-6b65b948bf-8fx58 :8081
Error: unknown shorthand flag: 'p' in -p


Usage:
  oc port-forward TYPE/NAME [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N] [flags]

Examples:
  # Listens on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod
  oc port-forward mypod 5000 6000
  
  # Listens on port 8888 locally, forwarding to 5000 in the pod
  oc port-forward mypod 8888:5000
  
  # Listens on a random port locally, forwarding to 5000 in the pod
  oc port-forward mypod :5000
  
  # Listens on a random port locally, forwarding to 5000 in the pod
  oc port-forward mypod 0:5000

Options:
      --pod-running-timeout=1m0s: The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running

Use "oc options" for a list of global command-line options (applies to all commands).

[jzhang@dhcp-140-18 ocp410]$ oc version
oc v4.0.0-0.177.0
kubernetes v1.12.4+6a9f178753
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://api.jian-410.qe.devcluster.openshift.com:6443
kubernetes v1.12.4+0ba401e

Comment 6 Jian Zhang 2019-04-12 02:34:37 UTC
*** Bug 1698530 has been marked as a duplicate of this bug. ***

Comment 7 Jian Zhang 2019-04-12 02:36:17 UTC
*** Bug 1698533 has been marked as a duplicate of this bug. ***

Comment 8 Frederic Branczyk 2019-04-12 18:39:10 UTC
As https://bugzilla.redhat.com/show_bug.cgi?id=1698530 is closed as a dupe of this, I just want to make sure that the expectation is that both olm-operator and catalog-operator targets are shown as UP in Prometheus when this bug is verified. Thanks :)

Comment 9 Jeff Peeler 2019-04-12 19:06:15 UTC
The PR to fix this issue is here:
https://github.com/operator-framework/operator-lifecycle-manager/pull/809

Will set to modified once it merges.

I too encourage QE to test using Prometheus itself for final verification.

Comment 10 Jian Zhang 2019-04-19 06:03:21 UTC
It works well, LGTM, verifying it. Details below:

Cluster version is 4.1.0-0.nightly-2019-04-18-210657
OLM version info:
               io.openshift.build.commit.id=c718ec855bb26a111d66ba2ba193d30e54f7feb1
               io.openshift.build.commit.url=https://github.com/operator-framework/operator-lifecycle-manager/commit/c718ec855bb26a111d66ba2ba193d30e54f7feb1
               io.openshift.build.source-location=https://github.com/operator-framework/operator-lifecycle-manager

1. Log in to the cluster as the kubeadmin user on the web console.
2. Click "Monitoring"->"Metrics" -> "Login as the Openshift"->"Status"->"Targets"; both olm-operator and catalog-operator targets are shown as UP in Prometheus:
openshift-operator-lifecycle-manager/catalog-operator/0 (1/1 up)
openshift-operator-lifecycle-manager/olm-operator/0 (1/1 up)

3. Queried the metrics below successfully:
install_plan_count
subscription_count
catalog_source_count
csv_count 
csv_upgrade_count


In the back end:

1, mac:OCP-21082 jianzhang$ oc get pods
NAME                                READY     STATUS    RESTARTS   AGE
catalog-operator-854d6b45dc-q97z6   1/1       Running   0          153m
olm-operator-78dff998fd-lx8d4       1/1       Running   0          153m
olm-operators-85grh                 1/1       Running   0          151m
packageserver-575f9f6d44-cw75n      1/1       Running   0          150m
packageserver-575f9f6d44-lljt4      1/1       Running   0          150m

2, mac:OCP-21082 jianzhang$ oc port-forward olm-operator-78dff998fd-lx8d4 8081
Forwarding from 127.0.0.1:8081 -> 8081
Forwarding from [::1]:8081 -> 8081
Handling connection for 8081

3, mac:OCP-21082 jianzhang$ curl -k https://localhost:8081/metrics 
# HELP csv_count Number of CSVs successfully registered
# TYPE csv_count gauge
csv_count 51.0
# HELP csv_upgrade_count Monotonic count of CSV upgrades
# TYPE csv_upgrade_count counter
csv_upgrade_count 0.0
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0.0"} 5.2359e-05
go_gc_duration_seconds{quantile="0.25"} 8.8557e-05
go_gc_duration_seconds{quantile="0.5"} 9.9791e-05
go_gc_duration_seconds{quantile="0.75"} 0.000127908
go_gc_duration_seconds{quantile="1.0"} 0.003165668
go_gc_duration_seconds_sum 0.03368277
go_gc_duration_seconds_count 217.0
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 171.0
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.10.8"} 1.0
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 3.3366144e+07
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 4.06155036e+09
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 2.045796e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 2.73973e+07
# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
# TYPE go_memstats_gc_cpu_fraction gauge
go_memstats_gc_cpu_fraction 5.048383615402468e-05
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 2.510848e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 3.3366144e+07
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 1.5310848e+07
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 4.796416e+07
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 175156.0
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 0.0
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 6.3275008e+07
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.5556532818365376e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 8408.0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 2.7572456e+07
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 6944.0
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 16384.0
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 684304.0
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 770048.0
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 5.4834432e+07
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 1.012116e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 1.736704e+06
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 1.736704e+06
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 7.1366904e+07
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 16.0
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 54.88
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 14.0
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 9.0324992e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.55564411169e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.01146624e+08
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes -1.0
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1.0
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 572.0
promhttp_metric_handler_requests_total{code="500"} 0.0
promhttp_metric_handler_requests_total{code="503"} 0.0

^Cmac:OCP-21082 jianzhang$ oc port-forward catalog-operator-854d6b45dc-q97z6  8081
Forwarding from 127.0.0.1:8081 -> 8081
Forwarding from [::1]:8081 -> 8081
Handling connection for 8081

mac:OCP-21082 jianzhang$ curl -k https://localhost:8081/metrics 
# HELP catalog_source_count Number of catalog sources
# TYPE catalog_source_count gauge
catalog_source_count 7.0
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0.0"} 5.3037e-05
go_gc_duration_seconds{quantile="0.25"} 8.752e-05
go_gc_duration_seconds{quantile="0.5"} 0.000111875
go_gc_duration_seconds{quantile="0.75"} 0.000148895
go_gc_duration_seconds{quantile="1.0"} 0.054026194
go_gc_duration_seconds_sum 0.078995425
go_gc_duration_seconds_count 156.0
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 478.0
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.10.8"} 1.0
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 3.9977224e+07
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 2.070520992e+09
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.691945e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 1.702911e+07
# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
# TYPE go_memstats_gc_cpu_fraction gauge
go_memstats_gc_cpu_fraction 4.641831346881468e-05
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 1.896448e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 3.9977224e+07
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 4.734976e+06
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 4.4384256e+07
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 235974.0
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 0.0
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 4.9119232e+07
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.5556533728963435e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 9878.0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 1.7265084e+07
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 6944.0
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 16384.0
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 475760.0
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 507904.0
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 4.250624e+07
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 591823.0
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 3.309568e+06
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 3.309568e+06
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 5.7133304e+07
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 16.0
# HELP install_plan_count Number of install plans
# TYPE install_plan_count gauge
install_plan_count 5.0
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 47.73
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 94.0
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 7.4395648e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.55564411186e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 8.675328e+07
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes -1.0
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1.0
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 580.0
promhttp_metric_handler_requests_total{code="500"} 0.0
promhttp_metric_handler_requests_total{code="503"} 0.0
# HELP subscription_count Number of subscriptions
# TYPE subscription_count gauge
subscription_count 5.0
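The gauge values in dumps like the one above can be pulled out with a minimal parser of the Prometheus text exposition format. This sketch handles only unlabeled samples (`name value` lines); real tooling should use a proper exposition-format parser:

```python
def parse_gauges(exposition: str) -> dict:
    """Extract unlabeled metric samples ('name value') from exposition text."""
    samples = {}
    for line in exposition.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip HELP/TYPE comments and blank lines
        parts = line.split()
        if len(parts) == 2 and "{" not in parts[0]:
            samples[parts[0]] = float(parts[1])
    return samples

# A fragment of the catalog-operator output above.
text = """
# HELP catalog_source_count Number of catalog sources
# TYPE catalog_source_count gauge
catalog_source_count 7.0
# HELP install_plan_count Number of install plans
# TYPE install_plan_count gauge
install_plan_count 5.0
# HELP subscription_count Number of subscriptions
# TYPE subscription_count gauge
subscription_count 5.0
"""
print(parse_gauges(text))
```

Labeled samples such as `promhttp_metric_handler_requests_total{code="200"}` are deliberately skipped here, since the OLM counters of interest carry no labels.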

Comment 12 errata-xmlrpc 2019-06-04 10:46:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758