Description of problem (please be as detailed as possible and provide log snippets):

Logs of the ocs-metrics-exporter pod periodically contain the following error:

{"level":"info","ts":1705074708.5682337,"caller":"collectors/ceph-cluster.go:142","msg":"Node: compute-0"}
{"level":"info","ts":1705074708.568298,"caller":"collectors/ceph-cluster.go:142","msg":"Node: compute-1"}
{"level":"info","ts":1705074708.5683098,"caller":"collectors/ceph-cluster.go:142","msg":"Node: compute-2"}
{"level":"info","ts":1705074708.5796983,"caller":"collectors/object-bucket.go:256","msg":"No ObjectBuckets present in the object store ocs-storagecluster-cephobjectstore"}
{"level":"info","ts":1705074724.2763164,"caller":"cache/rbd-mirror.go:296","msg":"RBD mirror store resync started at 2024-01-12 15:52:04.27609856 +0000 UTC m=+24039.608853785"}
{"level":"info","ts":1705074724.276402,"caller":"cache/rbd-mirror.go:321","msg":"RBD mirror store resync ended at 2024-01-12 15:52:04.276389497 +0000 UTC m=+24039.609144710"}
{"level":"error","ts":1705074731.9327457,"caller":"collectors/ceph-block-pool.go:136","msg":"Invalid image health, \"\", for pool ocs-storagecluster-cephblockpool. Must be OK, UNKNOWN, WARNING or ERROR.","stacktrace":"github.com/red-hat-storage/ocs-operator/v4/metrics/internal/collectors.(*CephBlockPoolCollector).collectMirroringImageHealth\n\t/remote-source/app/metrics/internal/collectors/ceph-block-pool.go:136\ngithub.com/red-hat-storage/ocs-operator/v4/metrics/internal/collectors.(*CephBlockPoolCollector).Collect\n\t/remote-source/app/metrics/internal/collectors/ceph-block-pool.go:81\ngithub.com/prometheus/client_golang/prometheus.(*Registry).Gather.func1\n\t/remote-source/app/vendor/github.com/prometheus/client_golang/prometheus/registry.go:455"}

Version of all relevant components (if applicable):
OCS 4.15.0-103.stable

Can this issue be reproduced from the UI?
Yes

Steps to Reproduce:
1. Check the logs of the ocs-metrics-exporter-* pod.

Actual results:
An error-level message is logged: "Invalid image health, \"\", for pool ocs-storagecluster-cephblockpool. Must be OK, UNKNOWN, WARNING or ERROR.", with the stack trace shown above pointing at (*CephBlockPoolCollector).collectMirroringImageHealth (metrics/internal/collectors/ceph-block-pool.go:136), called from (*CephBlockPoolCollector).Collect via prometheus.(*Registry).Gather.

Expected results:
No messages should be logged at the error level.

Additional info:
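The empty image health string presumably comes from the pool having RBD mirroring disabled, in which case Ceph reports no image health at all. Below is a minimal Go sketch of the kind of guard that would avoid the error-level log in that situation; the function name imageHealthToValue and the numeric metric values are illustrative assumptions, not the actual ocs-operator code:

package main

import (
	"fmt"
	"log"
)

// imageHealthToValue maps an RBD mirroring image health string to a numeric
// metric value. The accepted states mirror those named in the error message
// (OK, UNKNOWN, WARNING, ERROR); the numeric mapping is illustrative.
func imageHealthToValue(health string) (value float64, report bool, err error) {
	switch health {
	case "OK":
		return 0, true, nil
	case "UNKNOWN":
		return 1, true, nil
	case "WARNING":
		return 2, true, nil
	case "ERROR":
		return 3, true, nil
	case "":
		// Mirroring is not enabled on the pool, so no image health is
		// reported: skip the metric instead of logging an error.
		return 0, false, nil
	default:
		return 0, false, fmt.Errorf("invalid image health %q", health)
	}
}

func main() {
	for _, h := range []string{"OK", "", "BOGUS"} {
		v, report, err := imageHealthToValue(h)
		switch {
		case err != nil:
			log.Printf("error: %v", err) // only genuinely unknown states hit this
		case !report:
			log.Printf("health %q: metric skipped (mirroring disabled)", h)
		default:
			log.Printf("health %q -> %v", h, v)
		}
	}
}

With a guard like this, the collector stays silent on pools without mirroring and reserves error-level logging for genuinely unrecognized health strings.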
The error log message is no longer present. --> VERIFIED

Tested with ODF 4.15.0-134.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.15.0 security, enhancement, & bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:1383