Bug 2229863 - [RDR] Noobaa operator restarts multiple times on RDR longevity setup
Summary: [RDR] Noobaa operator restarts multiple times on RDR longevity setup
Keywords:
Status: CLOSED DUPLICATE of bug 2216401
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: Multi-Cloud Object Gateway
Version: 4.13
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Nimrod Becker
QA Contact: krishnaram Karthick
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-08-08 04:54 UTC by kmanohar
Modified: 2023-08-09 16:49 UTC
CC: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-08-09 08:56:28 UTC
Embargoed:



Description kmanohar 2023-08-08 04:54:06 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
The NooBaa operator restarts multiple times on the RDR longevity setup.

Version of all relevant components (if applicable):


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Keep the RDR cluster with replication running for an extended duration (in this case, 2 months).
2. Observe that the NooBaa operator restarts multiple times (86 times on c1, 75 times on c2); see the sketch below for how the counts can be read.
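
A minimal sketch of how the restart counts can be read, assuming the operator pod name contains "noobaa-operator" (the grep pattern and the <noobaa-operator-pod> placeholder are assumptions; adjust for the actual pod):

# List NooBaa operator pods; the RESTARTS column shows the count
oc get pods -n openshift-storage | grep noobaa-operator

# Read the restart count directly from the pod status
# (replace <noobaa-operator-pod> with the real pod name)
oc get pod <noobaa-operator-pod> -n openshift-storage \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'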

Log message in the NooBaa operator pod:

time="2023-08-02T07:03:09Z" level=info msg="❌ Not Found:  \"noobaa-default-backing-store-noobaa-noobaa\"\n"

oc get backingstore -n openshift-storage
NAME                           TYPE            PHASE   AGE
noobaa-default-backing-store   s3-compatible   Ready   55d
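
To correlate the "Not Found" message with the restarts, one could inspect the BackingStore conditions and the previous (pre-restart) operator container logs; the <noobaa-operator-pod> placeholder below is an assumption:

# Check the conditions reported on the BackingStore CR
oc describe backingstore noobaa-default-backing-store -n openshift-storage

# Logs from the previous container instance show what preceded a restart
oc logs <noobaa-operator-pod> -n openshift-storage --previous | tail -n 50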


Actual results:
The NooBaa operator restarted many times (86 times on c1, 75 times on c2).

Expected results:
So many restarts are not expected.

Additional info:
1) No RDR operations were performed.

Must-gather logs:

c1 - http://rhsqe-repo.lab.eng.blr.redhat.com/OCS/ocs-qe-bugs/keerthana/Longevity/ceph-mon-restart/c1/

c2 - http://rhsqe-repo.lab.eng.blr.redhat.com/OCS/ocs-qe-bugs/keerthana/Longevity/ceph-mon-restart/c2/

hub - http://rhsqe-repo.lab.eng.blr.redhat.com/OCS/ocs-qe-bugs/keerthana/Longevity/ceph-mon-restart/hub/
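
For reference, must-gather bundles like the ones above would typically be collected with a command along these lines; the exact ODF must-gather image path and tag are assumptions and should match the installed version:

# Collect ODF must-gather (image path/tag assumed for 4.13)
oc adm must-gather --image=registry.redhat.io/odf4/ocs-must-gather-rhel8:v4.13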

Live clusters are available for debugging:

hub - https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/25311/

c1 - https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/25313/

c2 - https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/25312/


Version details

ODF- 4.13.0-219
OCP - 4.13.0-0.nightly-2023-06-05-164816
ACM - 2.8
Submariner - 0.15.1
MCO - 4.13.0-219
ceph - ceph version 17.2.6-70.0.TEST.bz2119217.el9cp (6d74fefa15d1216867d1d112b47bb83c4913d28f) quincy (stable)

Comment 2 Nimrod Becker 2023-08-09 08:56:28 UTC
Please verify with 4.13.1; this issue is the same as https://bugzilla.redhat.com/show_bug.cgi?id=2216401

*** This bug has been marked as a duplicate of bug 2216401 ***

