Bug 2256456
| Summary: | Provide better logging for ocs-metrics-exporter | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | umanga <uchapaga> |
| Component: | ceph-monitoring | Assignee: | umanga <uchapaga> |
| Status: | CLOSED ERRATA | QA Contact: | Filip Balák <fbalak> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.15 | CC: | kramdoss, odf-bz-bot |
| Target Milestone: | --- | | |
| Target Release: | ODF 4.15.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | 4.15.0-102 | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 2256458 2256459 (view as bug list) | Environment: | |
| Last Closed: | 2024-03-19 15:30:13 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2256458, 2256459 | | |
Description
umanga
2024-01-02 12:42:07 UTC
What is the expected change in logs? How can it be reproduced?

You can verify it on any cluster. You should see logs in JSON format with easily readable timestamps. Earlier the logs were in plain-text format and not structured. Also, verify that there is no regression in the generated metrics.

Logs are in JSON format with a clear timestamp and message.
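For context, the field names in the log lines quoted below (level, ts, caller, msg) match the default JSON encoder of the zap logging library. The following is a minimal, hypothetical sketch of how such structured logging is typically set up in Go; it is an assumption based on the log shape, not a confirmed description of the exporter's code:

```go
package main

import "go.uber.org/zap"

func main() {
	// zap.NewProduction() writes JSON records with "level", "ts"
	// (epoch seconds), "caller", and "msg" fields, the same shape
	// as the exporter log lines quoted below.
	logger, err := zap.NewProduction()
	if err != nil {
		panic(err)
	}
	defer logger.Sync()

	logger.Info("Node: compute-0")
}
```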
There is a periodic error in ocs-metrics-exporter:
{"level":"info","ts":1705074708.5682337,"caller":"collectors/ceph-cluster.go:142","msg":"Node: compute-0"}
{"level":"info","ts":1705074708.568298,"caller":"collectors/ceph-cluster.go:142","msg":"Node: compute-1"}
{"level":"info","ts":1705074708.5683098,"caller":"collectors/ceph-cluster.go:142","msg":"Node: compute-2"}
{"level":"info","ts":1705074708.5796983,"caller":"collectors/object-bucket.go:256","msg":"No ObjectBuckets present in the object store ocs-storagecluster-cephobjectstore"}
{"level":"info","ts":1705074724.2763164,"caller":"cache/rbd-mirror.go:296","msg":"RBD mirror store resync started at 2024-01-12 15:52:04.27609856 +0000 UTC m=+24039.608853785"}
{"level":"info","ts":1705074724.276402,"caller":"cache/rbd-mirror.go:321","msg":"RBD mirror store resync ended at 2024-01-12 15:52:04.276389497 +0000 UTC m=+24039.609144710"}
{"level":"error","ts":1705074731.9327457,"caller":"collectors/ceph-block-pool.go:136","msg":"Invalid image health, \"\", for pool ocs-storagecluster-cephblockpool. Must be OK, UNKNOWN, WARNING or ERROR.","stacktrace":"github.com/red-hat-storage/ocs-operator/v4/metrics/internal/collectors.(*CephBlockPoolCollector).collectMirroringImageHealth\n\t/remote-source/app/metrics/internal/collectors/ceph-block-pool.go:136\ngithub.com/red-hat-storage/ocs-operator/v4/metrics/internal/collectors.(*CephBlockPoolCollector).Collect\n\t/remote-source/app/metrics/internal/collectors/ceph-block-pool.go:81\ngithub.com/prometheus/client_golang/prometheus.(*Registry).Gather.func1\n\t/remote-source/app/vendor/github.com/prometheus/client_golang/prometheus/registry.go:455"}
@umanga Could it be caused by a change introduced in this bug?
Tested with:
ocs 4.15.0-103.stable
It's not due to the changes for this bug. It is an internal error logged for debugging purposes even before this change. It does not have user impact.

Other regressions were not found. BZ 2258591 was reported based on comments 8 and 9. Moving to VERIFIED based on comments 8 and 9.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.15.0 security, enhancement, & bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:1383
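To illustrate why the "Invalid image health" message is an internal debug-level concern rather than a user-facing failure, here is a hypothetical sketch (names and structure are illustrative assumptions, not the exporter's actual code) of the kind of check that produces it: an empty health string from the pool status falls through to the default case and is logged, while known states are exported as metrics.

```go
package main

import (
	"fmt"

	"go.uber.org/zap"
)

// validateImageHealth is a hypothetical sketch of the health-state check
// behind the error above; an unexpected (here, empty) value is only logged.
func validateImageHealth(logger *zap.Logger, pool, health string) {
	switch health {
	case "OK", "UNKNOWN", "WARNING", "ERROR":
		// Known state: export the corresponding metric value (omitted here).
	default:
		logger.Error(fmt.Sprintf(
			"Invalid image health, %q, for pool %s. Must be OK, UNKNOWN, WARNING or ERROR.",
			health, pool))
	}
}

func main() {
	logger, _ := zap.NewProduction()
	defer logger.Sync()
	// An empty string reproduces the message seen in the exporter logs.
	validateImageHealth(logger, "ocs-storagecluster-cephblockpool", "")
}
```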