+++ This bug was initially created as a clone of Bug #2255586 +++

Description of problem (please be detailed as possible and provide log snippets):

NooBaa reports that backingstore "noobaa-default-backing-store-noobaa-noobaa" does not exist, and logs reconcile messages excessively:

time="2023-12-19T15:28:23Z" level=info msg="Create event detected for rook-ceph-osd-token-t292s (openshift-storage), queuing Reconcile"
time="2023-12-19T15:28:23Z" level=info msg="checking which namespaceStores to reconcile. mapping secret openshift-storage/rook-ceph-osd-token-t292s to namespaceStores"
time="2023-12-19T15:28:23Z" level=info msg="will reconcile these backingstores: []"
time="2023-12-19T15:28:23Z" level=info msg="Create event detected for ocs-metrics-exporter-dockercfg-7c74p (openshift-storage), queuing Reconcile"
time="2023-12-19T15:28:23Z" level=info msg="checking which backingstore to reconcile. mapping secret openshift-storage/ocs-metrics-exporter-dockercfg-7c74p to backingstores"
time="2023-12-19T15:28:23Z" level=info msg="❌ Not Found: \"noobaa-default-backing-store-noobaa-noobaa\"\n"
time="2023-12-19T15:28:23Z" level=info msg="SetPhase: Verifying" backingstore=openshift-storage/noobaa-default-backing-store
time="2023-12-19T15:28:23Z" level=info msg="SetPhase: Connecting" backingstore=openshift-storage/noobaa-default-backing-store
time="2023-12-19T15:28:23Z" level=info msg="will reconcile these namespaceStores: []"
time="2023-12-19T15:28:23Z" level=info msg="Create event detected for noobaa-db-token-mg6j5 (openshift-storage), queuing Reconcile"
time="2023-12-19T15:28:23Z" level=info msg="checking which namespaceStores to reconcile. mapping secret openshift-storage/noobaa-db-token-mg6j5 to namespaceStores"
--
time="2023-12-19T15:28:25Z" level=info msg="will reconcile these backingstores: []"
time="2023-12-19T15:28:25Z" level=info msg="Create event detected for rook-ceph-osd-dockercfg-9pw84 (openshift-storage), queuing Reconcile"
time="2023-12-19T15:28:25Z" level=info msg="checking which backingstore to reconcile. mapping secret openshift-storage/rook-ceph-osd-dockercfg-9pw84 to backingstores"
time="2023-12-19T15:28:25Z" level=info msg="✅ Exists: Secret \"rook-ceph-object-user-ocs-storagecluster-cephobjectstore-noobaa-ceph-objectstore-user\"\n"
time="2023-12-19T15:28:25Z" level=info msg="✅ Exists: Secret \"rook-ceph-object-user-ocs-storagecluster-cephobjectstore-noobaa-ceph-objectstore-user\"\n"
time="2023-12-19T15:28:25Z" level=info msg="❌ Not Found: \"noobaa-default-backing-store-noobaa-noobaa\"\n"
time="2023-12-19T15:28:25Z" level=info msg="SetPhase: Verifying" backingstore=openshift-storage/noobaa-default-backing-store
time="2023-12-19T15:28:25Z" level=info msg="SetPhase: Connecting" backingstore=openshift-storage/noobaa-default-backing-store
time="2023-12-19T15:28:25Z" level=info msg="will reconcile these backingstores: []"
time="2023-12-19T15:28:25Z" level=info msg="Create event detected for csi-addons-controller-manager-service-cert (openshift-storage), queuing Reconcile"
time="2023-12-19T15:28:25Z" level=info msg="checking which backingstore to reconcile. mapping secret openshift-storage/csi-addons-controller-manager-service-cert to backingstores"
--
time="2023-12-19T15:28:26Z" level=info msg="will reconcile these backingstores: []"
time="2023-12-19T15:28:26Z" level=info msg="Create event detected for rook-ceph-crash-collector-keyring (openshift-storage), queuing Reconcile"
time="2023-12-19T15:28:26Z" level=info msg="checking which backingstore to reconcile. mapping secret openshift-storage/rook-ceph-crash-collector-keyring to backingstores"
time="2023-12-19T15:28:26Z" level=info msg="✅ Exists: Secret \"rook-ceph-object-user-ocs-storagecluster-cephobjectstore-noobaa-ceph-objectstore-user\"\n"
time="2023-12-19T15:28:26Z" level=info msg="✅ Exists: Secret \"rook-ceph-object-user-ocs-storagecluster-cephobjectstore-noobaa-ceph-objectstore-user\"\n"
time="2023-12-19T15:28:26Z" level=info msg="❌ Not Found: \"noobaa-default-backing-store-noobaa-noobaa\"\n"
time="2023-12-19T15:28:26Z" level=info msg="SetPhase: Verifying" backingstore=openshift-storage/noobaa-default-backing-store
time="2023-12-19T15:28:26Z" level=info msg="SetPhase: Connecting" backingstore=openshift-storage/noobaa-default-backing-store
time="2023-12-19T15:28:26Z" level=info msg="will reconcile all backingstores: [openshift-storage/noobaa-default-backing-store]"
time="2023-12-19T15:28:26Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores"
time="2023-12-19T15:28:26Z" level=info msg="✅ Exists: NooBaa \"noobaa\"\n"

--- many reconcile messages that create very large NooBaa logs ---

time="2023-12-19T15:28:26Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores"
time="2023-12-19T15:28:26Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores"
time="2023-12-19T15:28:26Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores"
time="2023-12-19T15:28:27Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores"
time="2023-12-19T15:28:27Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores"
time="2023-12-19T15:28:35Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores"
time="2023-12-19T15:28:35Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores"

-----

Version of all relevant components (if applicable):
ODF 4.13.3, and also ODF 4.14

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Continuous error messages and very large NooBaa logs

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
N/A

Can this issue be reproduced?
Yes, the EMEA ODF 4.14 lab shows the same behaviour:

EMEA-odf414 % grep "checking which backingstore to reconcile" noobaa-operator-57b569bdcf-jwvbj-21dec.log | more
time="2023-12-21T09:07:37Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores"
time="2023-12-21T09:07:37Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores"
time="2023-12-21T09:07:47Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores"
time="2023-12-21T09:07:47Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores"
time="2023-12-21T09:07:56Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores"
time="2023-12-21T09:07:56Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores"
time="2023-12-21T09:07:57Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores"
time="2023-12-21T09:07:57Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores"
time="2023-12-21T09:07:57Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores"
time="2023-12-21T09:07:57Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores"
time="2023-12-21T09:07:58Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores"

EMEA-odf414 % grep -C5 noobaa-default-backing-store-noobaa-noobaa noobaa-operator-57b569bdcf-jwvbj-21dec.log | more
time="2023-12-21T09:07:37Z" level=info msg="✅ Exists: NooBaa \"noobaa\"\n"
time="2023-12-21T09:07:37Z" level=info msg="✅ Exists: BackingStore \"noobaa-default-backing-store\"\n"
time="2023-12-21T09:07:37Z" level=info msg="✅ Exists: Secret \"rook-ceph-object-user-ocs-storagecluster-cephobjectstore-noobaa-ceph-objectstore-user\"\n"
time="2023-12-21T09:07:37Z" level=info msg="✅ Exists: Secret \"rook-ceph-object-user-ocs-storagecluster-cephobjectstore-noobaa-ceph-objectstore-user\"\n"
time="2023-12-21T09:07:37Z" level=info msg="✅ Exists: Secret \"rook-ceph-object-user-ocs-storagecluster-cephobjectstore-noobaa-ceph-objectstore-user\"\n"
time="2023-12-21T09:07:37Z" level=info msg="❌ Not Found: \"noobaa-default-backing-store-noobaa-noobaa\"\n"
time="2023-12-21T09:07:37Z" level=info msg="SetPhase: Verifying" backingstore=openshift-storage/noobaa-default-backing-store
time="2023-12-21T09:07:37Z" level=info msg="SetPhase: Connecting" backingstore=openshift-storage/noobaa-default-backing-store
time="2023-12-21T09:07:37Z" level=info msg="✅ Exists: NooBaa \"noobaa\"\n"
time="2023-12-21T09:07:37Z" level=info msg="✅ Exists: Service \"noobaa-mgmt\"\n"
time="2023-12-21T09:07:37Z" level=info msg="✅ Exists: Secret \"noobaa-operator\"\n"
--
time="2023-12-21T09:07:47Z" level=info msg="✅ Exists: NooBaa \"noobaa\"\n"
time="2023-12-21T09:07:47Z" level=info msg="✅ Exists: BackingStore \"noobaa-default-backing-store\"\n"
time="2023-12-21T09:07:47Z" level=info msg="✅ Exists: Secret \"rook-ceph-object-user-ocs-storagecluster-cephobjectstore-noobaa-ceph-objectstore-user\"\n"
time="2023-12-21T09:07:47Z" level=info msg="✅ Exists: Secret \"rook-ceph-object-user-ocs-storagecluster-cephobjectstore-noobaa-ceph-objectstore-user\"\n"
time="2023-12-21T09:07:47Z" level=info msg="✅ Exists: Secret \"rook-ceph-object-user-ocs-storagecluster-cephobjectstore-noobaa-ceph-objectstore-user\"\n"
time="2023-12-21T09:07:47Z" level=info msg="❌ Not Found: \"noobaa-default-backing-store-noobaa-noobaa\"\n"
time="2023-12-21T09:07:47Z" level=info msg="SetPhase: Verifying" backingstore=openshift-storage/noobaa-default-backing-store
time="2023-12-21T09:07:47Z" level=info msg="SetPhase: Connecting" backingstore=openshift-storage/noobaa-default-backing-store
time="2023-12-21T09:07:47Z" level=info msg="✅ Exists: NooBaa \"noobaa\"\n"
time="2023-12-21T09:07:47Z" level=info msg="✅ Exists: Service \"noobaa-mgmt\"\n"
time="2023-12-21T09:07:47Z" level=info msg="✅ Exists: Secret \"noobaa-operator\"\n"
--
time="2023-12-21T09:07:56Z" level=info msg="setKMSConditionStatus Sync" sys=openshift-storage/noobaa
time="2023-12-21T09:07:56Z" level=info msg="ReconcileKeyRotation, KMS Starting" sys=openshift-storage/noobaa
time="2023-12-21T09:07:56Z" level=info msg="ReconcileKeyRotation, KMS skip reconcile, single master root mode" sys=openshift-storage/noobaa
time="2023-12-21T09:07:56Z" level=info msg="✅ Exists: Secret \"rook-ceph-object-user-ocs-storagecluster-cephobjectstore-noobaa-ceph-objectstore-user\"\n"
time="2023-12-21T09:07:56Z" level=info msg="✅ Exists: Secret \"rook-ceph-object-user-ocs-storagecluster-cephobjectstore-noobaa-ceph-objectstore-user\"\n"
time="2023-12-21T09:07:56Z" level=info msg="❌ Not Found: \"noobaa-default-backing-store-noobaa-noobaa\"\n"
time="2023-12-21T09:07:56Z" level=info msg="SetPhase: Verifying" backingstore=openshift-storage/noobaa-default-backing-store
time="2023-12-21T09:07:56Z" level=info msg="SetPhase: Connecting" backingstore=openshift-storage/noobaa-default-backing-store time="2023-12-21T09:07:56Z" level=info msg="✅ Updated: SecurityContextConstraints \"noobaa-db\"\n" time="2023-12-21T09:07:56Z" level=info msg="ReconcileObject: Done - unchanged ServiceAccount noobaa-db " sys=openshift-storage/noobaa time="2023-12-21T09:07:56Z" level=info msg="ReconcileObject: Done - unchanged Role noobaa-db " sys=openshift-storage/noobaa Can this issue reproduce from the UI? N/A If this is a regression, please provide more details to justify this: N/A Steps to Reproduce: ODF 4.14 lab has been created and Noobaa shows the above messages Actual results: - backingstore noobaa-default-backing-store-noobaa-noobaa not found - Very large Noobaa logs due to many reconcile messages Expected results: - backingstore noobaa-default-backing-store-noobaa-noobaa not found - Normal size Noobaa logs Additional info: case # 03681098 --- Additional comment from RHEL Program Management on 2023-12-22 07:55:34 UTC --- This bug having no release flag set previously, is now set with release flag 'odf‑4.15.0' to '?', and so is being proposed to be fixed at the ODF 4.15.0 release. Note that the 3 Acks (pm_ack, devel_ack, qa_ack), if any previously set while release flag was missing, have now been reset since the Acks are to be set against a release flag. --- Additional comment from on 2023-12-22 07:57:26 UTC --- Correction: Expected results: - should not have this error message: "backingstore noobaa-default-backing-store-noobaa-noobaa not found" - Normal size Noobaa logs --- Additional comment from on 2024-01-02 06:59:25 UTC --- Hi, Can you please update regarding this BZ? Thanks --- Additional comment from on 2024-01-04 06:49:04 UTC --- Hi, Can you please update regarding this BZ? Thanks --- Additional comment from on 2024-01-08 15:01:50 UTC --- Hi, Can you please update regarding this BZ? 
Thanks --- Additional comment from Liran Mauda on 2024-01-09 07:20:35 UTC --- hi @tochan Can you provide more details? - what is the system status? - what are you trying to do? Please also provide ODF mast-gather Best Regards, Liran. --- Additional comment from on 2024-01-09 07:53:25 UTC --- Hi, Customer is seeing this kind of error messages: "time="2023-12-19T15:28:26Z" level=info msg="❌ Not Found: \"noobaa-default-backing-store-noobaa-noobaa\"\n"" Also, there are too many logged messages like the one below: Example: time="2023-12-21T09:07:37Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores" Must gather is 400 MB size, I can't attach it to the BZ. Do you have access to supportshell? Thanks. --- Additional comment from Liran Mauda on 2024-01-09 08:50:26 UTC --- Hi, Can you provide the noobaa part and the directory in the must gather of namespace -> openshift-storage also, can you describe the cluster state, and the flow leads to it? Best Regards, Liran. --- Additional comment from on 2024-01-09 09:22:53 UTC --- Hi, Mustgather is in supportshell: path: 03681098/0060-must-gather-odf-noobaa-operator-log.tar.gz path to openshift-storage namespace inside MG: 03681098/0060-must-gather-odf-noobaa-operator-log.tar.gz/registry-redhat-io-odf4-odf-must-gather-rhel9-sha256-4981ff842cfcd6f839db71afae12f254ba4edb48cbec997fbbaeb7b6a968b765/namespaces/openshift-storage Noobaa operator log with the errors: 0050-noobaa-operator-6cb667559d-kww45.log Thanks --- Additional comment from Liran Mauda on 2024-01-10 11:20:15 UTC --- Hi Tomas, looking at the must-gather we can see the log: `msg="❌ Not Found: \"noobaa-default-backing-store-noobaa-noobaa\"\n"` This is a bug where we try to check a nonexisting kubernetes object (there is no such object "noobaa-default-backing-store-noobaa-noobaa"). This issue is not affecting NooBaa and as per the Must-gather, NooBaa is working properly. 
as for `NooBaa operator in a constant cycle of reconciling default backing store` : NooBaa operator is in a constant cycle of reconciling as designed, it is the way all operator works, they watch for changes and act upon them, the fact that they see a message in the logs does not mean NooBaa did something upon a change. The noobaa logs are not over-flooding in size, as you can see in Must Gather, the size of noobaa-operator log is 18M. We are planning to find and fix the `no such object "noobaa-default-backing-store-noobaa-noobaa"` issue. Do you need something more? Best Regards, Liran. --- Additional comment from RHEL Program Management on 2024-01-21 21:50:59 UTC --- This BZ is being approved for ODF 4.15.0 release, upon receipt of the 3 ACKs (PM,Devel,QA) for the release flag 'odf‑4.15.0 --- Additional comment from RHEL Program Management on 2024-01-21 21:50:59 UTC --- Since this bug has been approved for ODF 4.15.0 release, through release flag 'odf-4.15.0+', the Target Release is being set to 'ODF 4.15.0
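The log volume discussed above can be quantified with standard text tools before shipping a 400 MB must-gather: stripping the per-line timestamp makes repeated messages identical, so they can be counted. A minimal sketch; the sample file below is a hypothetical stand-in for a real noobaa-operator pod log (the actual file in this case was noobaa-operator-57b569bdcf-jwvbj-21dec.log), and the message format is taken from the snippets in this report.

```shell
# Build a tiny stand-in log (illustrative only).
cat > /tmp/noobaa-operator-sample.log <<'EOF'
time="2023-12-21T09:07:37Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores"
time="2023-12-21T09:07:37Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores"
time="2023-12-21T09:07:47Z" level=info msg="checking which backingstore to reconcile. mapping Noobaa openshift-storage/noobaa to backingstores"
time="2023-12-21T09:07:47Z" level=info msg="SetPhase: Verifying" backingstore=openshift-storage/noobaa-default-backing-store
EOF

# Strip the leading time="..." field so identical messages collapse,
# then count each distinct message, most frequent first.
sed 's/^time="[^"]*" //' /tmp/noobaa-operator-sample.log | sort | uniq -c | sort -rn
```

Run against a real operator log, the top lines of this output show exactly which reconcile messages dominate the file size, which is useful evidence when arguing whether the logging is excessive.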
Please backport the fix to 4.14
Hi Alexander,

We have the BZ below, which is a clone of the original BZ:
https://bugzilla.redhat.com/show_bug.cgi?id=2260852

Thanks,
Uday
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.14.10 Bug Fix Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:6398