Description of problem (please be as detailed as possible and provide log snippets):
ocs-metrics-exporter logs are not detailed enough to identify some of the internal errors, and the log format is not structured.

Version of all relevant components (if applicable):

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
No

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Is this issue reproducible?

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:
Not a regression.

Steps to Reproduce:
1.
2.
3.

Actual results:
Logs do not have enough information about internal errors.

Expected results:
Logs should have enough information about internal errors.

Additional info:
What is the expected change in logs? How can it be reproduced?
You can verify this on any cluster. You should see logs in JSON format with easily readable timestamps; earlier the logs were in plain text and not structured. Also, please verify that there is no regression in the generated metrics.
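For reference, the "level"/"ts"/"caller"/"msg" fields in the new output match the default JSON encoder of zap's production configuration. A minimal sketch, assuming the exporter uses go.uber.org/zap (the field layout suggests this, but the snippet below is illustrative, not the actual exporter code):

package main

import "go.uber.org/zap"

func main() {
	// zap.NewProduction() builds a logger whose JSON encoder writes "level",
	// "ts" (epoch seconds), "caller", and "msg" keys, and adds a "stacktrace"
	// field for error-level entries.
	logger, err := zap.NewProduction()
	if err != nil {
		panic(err)
	}
	defer logger.Sync()

	// Emits something like:
	// {"level":"info","ts":1705074708.56,"caller":"main/main.go:17","msg":"Node: compute-0"}
	logger.Sugar().Infof("Node: %s", "compute-0")
}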
Logs are in JSON format with a clear timestamp and message. However, there is a periodic error in ocs-metrics-exporter:

{"level":"info","ts":1705074708.5682337,"caller":"collectors/ceph-cluster.go:142","msg":"Node: compute-0"}
{"level":"info","ts":1705074708.568298,"caller":"collectors/ceph-cluster.go:142","msg":"Node: compute-1"}
{"level":"info","ts":1705074708.5683098,"caller":"collectors/ceph-cluster.go:142","msg":"Node: compute-2"}
{"level":"info","ts":1705074708.5796983,"caller":"collectors/object-bucket.go:256","msg":"No ObjectBuckets present in the object store ocs-storagecluster-cephobjectstore"}
{"level":"info","ts":1705074724.2763164,"caller":"cache/rbd-mirror.go:296","msg":"RBD mirror store resync started at 2024-01-12 15:52:04.27609856 +0000 UTC m=+24039.608853785"}
{"level":"info","ts":1705074724.276402,"caller":"cache/rbd-mirror.go:321","msg":"RBD mirror store resync ended at 2024-01-12 15:52:04.276389497 +0000 UTC m=+24039.609144710"}
{"level":"error","ts":1705074731.9327457,"caller":"collectors/ceph-block-pool.go:136","msg":"Invalid image health, \"\", for pool ocs-storagecluster-cephblockpool. Must be OK, UNKNOWN, WARNING or ERROR.","stacktrace":"github.com/red-hat-storage/ocs-operator/v4/metrics/internal/collectors.(*CephBlockPoolCollector).collectMirroringImageHealth\n\t/remote-source/app/metrics/internal/collectors/ceph-block-pool.go:136\ngithub.com/red-hat-storage/ocs-operator/v4/metrics/internal/collectors.(*CephBlockPoolCollector).Collect\n\t/remote-source/app/metrics/internal/collectors/ceph-block-pool.go:81\ngithub.com/prometheus/client_golang/prometheus.(*Registry).Gather.func1\n\t/remote-source/app/vendor/github.com/prometheus/client_golang/prometheus/registry.go:455"}

@umanga Could this error be caused by a change introduced in this bug?

Tested with: ocs 4.15.0-103.stable
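For context on that periodic error: the message comes from a value check in the block-pool collector. Below is a hypothetical illustration of the kind of validation that emits it; the function name, signature, and numeric mapping are assumptions made for illustration, not the actual ocs-operator code.

package collectors // illustrative package name

import "go.uber.org/zap"

// imageHealthToValue is a hypothetical helper showing the type of check that
// produces the "Invalid image health" error: only the four documented states
// are accepted, and anything else (including an empty status while mirroring
// has not reported yet) is logged as an error and skipped.
func imageHealthToValue(health, pool string, logger *zap.SugaredLogger) (float64, bool) {
	switch health {
	case "OK":
		return 0, true
	case "UNKNOWN":
		return 1, true
	case "WARNING":
		return 2, true
	case "ERROR":
		return 3, true
	default:
		logger.Errorf("Invalid image health, %q, for pool %s. Must be OK, UNKNOWN, WARNING or ERROR.", health, pool)
		return 0, false
	}
}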
It's not due to the changes for this bug; this internal error was logged for debugging purposes even before this change, and it has no user impact.
No other regressions were found. BZ 2258591 was reported based on comments 8 and 9. Moving to VERIFIED based on comments 8 and 9.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.15.0 security, enhancement, & bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2024:1383