Bug 2256456 - Provide better logging for ocs-metrics-exporter
Summary: Provide better logging for ocs-metrics-exporter
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ceph-monitoring
Version: 4.15
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ODF 4.15.0
Assignee: umanga
QA Contact: Filip Balák
URL:
Whiteboard:
Depends On:
Blocks: 2256458 2256459
 
Reported: 2024-01-02 12:42 UTC by umanga
Modified: 2024-03-19 15:30 UTC
CC List: 2 users

Fixed In Version: 4.15.0-102
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 2256458 2256459
Environment:
Last Closed: 2024-03-19 15:30:13 UTC
Embargoed:




Links
Github red-hat-storage/ocs-operator pull 2356 (open): Bug 2256456: [release-4.15] Enhanced logger for ocs-metrics-exporter (last updated 2024-01-02 13:47:01 UTC)
Red Hat Bugzilla 2258591 (CLOSED): Error 'Invalid image health, "", for pool ocs-storagecluster-cephblockpool' in ocs-metrics-expoter logs (last updated 2024-03-19 15:31:29 UTC)
Red Hat Product Errata RHSA-2024:1383 (last updated 2024-03-19 15:30:25 UTC)

Description umanga 2024-01-02 12:42:07 UTC
Description of problem (please be as detailed as possible and provide log snippets):

ocs-metrics-exporter logs do not provide enough detail to identify some internal errors, and the log format is not structured.
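
For illustration only (the exporter's current logging code is not quoted in this report), the difference between an unstructured line and a structured JSON entry for the same hypothetical event can be sketched with just the Go standard library:

// Illustrative only: contrast between an unstructured log line and a
// structured JSON entry for the same hypothetical event. This is not
// the exporter's actual code.
package main

import (
	"encoding/json"
	"log"
	"os"
	"time"
)

func main() {
	pool := "ocs-storagecluster-cephblockpool"

	// Unstructured: a free-form line whose fields (level, pool name,
	// timestamp) a log-processing tool cannot reliably extract.
	log.Printf("invalid image health for pool %s", pool)

	// Structured: the same event as a JSON object with explicit fields.
	entry := map[string]interface{}{
		"level": "error",
		"ts":    float64(time.Now().UnixNano()) / 1e9,
		"msg":   "invalid image health for pool " + pool,
	}
	_ = json.NewEncoder(os.Stdout).Encode(entry)
}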

Version of all relevant components (if applicable):


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
No

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:
Not a regression.

Steps to Reproduce:
1.
2.
3.


Actual results: Logs do not have enough information about internal errors.


Expected results: Logs should have enough information about internal errors.


Additional info:

Comment 6 Filip Balák 2024-01-03 08:21:40 UTC
What is the expected change in logs? How can it be reproduced?

Comment 7 umanga 2024-01-05 05:20:00 UTC
You can verify it on any cluster. You should see logs in JSON format with easily readable timestamps.
Earlier, the logs were in plain-text format and not structured.
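
For illustration, a minimal sketch assuming the exporter's logger behaves like go.uber.org/zap's production configuration, which emits the level/ts/caller/msg JSON fields seen in the log snippets below; this is not the exporter's actual code:

// Minimal sketch, assuming a go.uber.org/zap production logger.
package main

import "go.uber.org/zap"

func main() {
	logger, err := zap.NewProduction()
	if err != nil {
		panic(err)
	}
	defer func() { _ = logger.Sync() }()

	sugar := logger.Sugar()
	// Emits a JSON line similar to the "Node: compute-0" entries below.
	sugar.Infof("Node: %s", "compute-0")
	// Error-level entries also carry a "stacktrace" field.
	sugar.Errorf("Invalid image health, %q, for pool %s", "", "ocs-storagecluster-cephblockpool")
}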

Also, need to verify that there is no regression in generated metrics.

Comment 8 Filip Balák 2024-01-12 15:56:46 UTC
Logs are in JSON format with a clear timestamp and message.
There is a periodic error in ocs-metrics-exporter:

{"level":"info","ts":1705074708.5682337,"caller":"collectors/ceph-cluster.go:142","msg":"Node: compute-0"}
{"level":"info","ts":1705074708.568298,"caller":"collectors/ceph-cluster.go:142","msg":"Node: compute-1"}
{"level":"info","ts":1705074708.5683098,"caller":"collectors/ceph-cluster.go:142","msg":"Node: compute-2"}
{"level":"info","ts":1705074708.5796983,"caller":"collectors/object-bucket.go:256","msg":"No ObjectBuckets present in the object store ocs-storagecluster-cephobjectstore"}
{"level":"info","ts":1705074724.2763164,"caller":"cache/rbd-mirror.go:296","msg":"RBD mirror store resync started at 2024-01-12 15:52:04.27609856 +0000 UTC m=+24039.608853785"}
{"level":"info","ts":1705074724.276402,"caller":"cache/rbd-mirror.go:321","msg":"RBD mirror store resync ended at 2024-01-12 15:52:04.276389497 +0000 UTC m=+24039.609144710"}
{"level":"error","ts":1705074731.9327457,"caller":"collectors/ceph-block-pool.go:136","msg":"Invalid image health, \"\", for pool ocs-storagecluster-cephblockpool. Must be OK, UNKNOWN, WARNING or ERROR.","stacktrace":"github.com/red-hat-storage/ocs-operator/v4/metrics/internal/collectors.(*CephBlockPoolCollector).collectMirroringImageHealth\n\t/remote-source/app/metrics/internal/collectors/ceph-block-pool.go:136\ngithub.com/red-hat-storage/ocs-operator/v4/metrics/internal/collectors.(*CephBlockPoolCollector).Collect\n\t/remote-source/app/metrics/internal/collectors/ceph-block-pool.go:81\ngithub.com/prometheus/client_golang/prometheus.(*Registry).Gather.func1\n\t/remote-source/app/vendor/github.com/prometheus/client_golang/prometheus/registry.go:455"}

@umanga Could it be caused by a change introduced in this bug?

Tested with:
ocs 4.15.0-103.stable
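
As a triage aid, a hypothetical helper (not part of ocs-operator) that reads such JSON log lines on stdin, e.g. piped from oc logs, and prints only the error-level entries:

// Hypothetical triage helper, not part of ocs-operator:
//   oc logs <ocs-metrics-exporter-pod> | go run filter.go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// logEntry mirrors the fields emitted in the structured logs above.
type logEntry struct {
	Level  string  `json:"level"`
	TS     float64 `json:"ts"`
	Caller string  `json:"caller"`
	Msg    string  `json:"msg"`
}

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		var e logEntry
		if err := json.Unmarshal(scanner.Bytes(), &e); err != nil {
			continue // skip lines that are not JSON
		}
		if e.Level == "error" {
			fmt.Printf("%.3f %s: %s\n", e.TS, e.Caller, e.Msg)
		}
	}
}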

Comment 9 umanga 2024-01-16 05:45:24 UTC
It's not due to the changes for this bug.
It is an internal error logged for debugging purposes even before this change. It doesn't have user impact.

Comment 10 Filip Balák 2024-01-16 12:43:34 UTC
No other regressions were found. BZ 2258591 was reported based on comments 8 and 9.
Moving to VERIFIED based on those comments.

Comment 11 errata-xmlrpc 2024-03-19 15:30:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.15.0 security, enhancement, & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:1383

