Bug 2181535 - [GSS] Object storage in degraded state
Summary: [GSS] Object storage in degraded state
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: Multi-Cloud Object Gateway
Version: 4.11
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.13.0
Assignee: Utkarsh Srivastava
QA Contact: Tiffany Nguyen
URL:
Whiteboard:
Depends On:
Blocks: 2154341 2186482
 
Reported: 2023-03-24 13:00 UTC by Manjunatha
Modified: 2023-08-09 16:49 UTC
CC List: 16 users

Fixed In Version: 4.13.0-172
Doc Type: Bug Fix
Doc Text:
Previously, non-optimized database-related flows on deletions caused Multicloud Object Gateway to spike in CPU usage and perform slowly in mass-delete scenarios, for example when reclaiming a deleted object bucket claim (OBC). With this fix, the indexes used by the bucket reclaimer process are optimized, a new index is added to the database to speed up the database cleaner flows, and the bucket reclaimer is changed to work on batches of objects. (A general sketch of this technique follows the Links list below.)
Clone Of:
Environment:
Last Closed: 2023-06-21 15:25:01 UTC
Embargoed:




Links
- GitHub: noobaa/noobaa-core pull 7262 (Merged): Optimize NooBaa deletion paths. Last updated 2023-04-23 07:50:14 UTC
- GitHub: noobaa/noobaa-core pull 7277 (Merged): [Backport to 5.13] improve performance and fix lifecycle BG. Last updated 2023-07-09 06:54:26 UTC
- Red Hat Product Errata RHBA-2023:3742. Last updated 2023-06-21 15:25:26 UTC
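
A general sketch of the technique described in the Doc Text (index the deletion marker, then reclaim deleted rows in bounded batches). This is illustrative only: DB_URL, the objects_md table, and the deleted_at column are hypothetical stand-ins, not the actual NooBaa schema or the code in the PRs above.

# Illustrative only: names below are hypothetical, not the real NooBaa schema.
# 1. A partial index on the deletion marker lets the cleaner find deleted rows
#    without scanning the whole table.
psql "$DB_URL" -c "CREATE INDEX IF NOT EXISTS objects_md_deleted_at_idx
    ON objects_md (deleted_at) WHERE deleted_at IS NOT NULL;"

# 2. Reclaim deleted rows in bounded batches so each transaction stays short and
#    the database does not spike in CPU during mass deletes (e.g. OBC reclaim).
while :; do
  deleted=$(psql "$DB_URL" -t -A -c "
    WITH batch AS (
      DELETE FROM objects_md
      WHERE ctid IN (SELECT ctid FROM objects_md
                     WHERE deleted_at IS NOT NULL LIMIT 1000)
      RETURNING 1
    )
    SELECT count(*) FROM batch;")
  [ "${deleted:-0}" -eq 0 ] && break
done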

Description Manjunatha 2023-03-24 13:00:36 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
Object storage is in a degraded state and the customer is unable to access the buckets using the "s3" command.
When I checked, noobaa-default-bucket-class is in a Rejected state. It uses the backingstore noobaa-pv-backing-store, which is in the "ALL_NODES_OFFLINE" state. This backingstore is created on an RBD PV, and that PV looks healthy.
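
(For reference, a minimal way to inspect the resources named above, assuming the default openshift-storage namespace:)

$ oc get bucketclass noobaa-default-bucket-class -n openshift-storage -o yaml
$ oc describe backingstore noobaa-pv-backing-store -n openshift-storage
$ oc get noobaa noobaa -n openshift-storage   # overall MCG phase/health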

History of the issue: the cluster became full (above 80%), so we deleted unwanted PVs to free up space. After this, the issue with object storage started.

Version of all relevant components (if applicable):
odf-operator.v4.11.5

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Yes, we are unable to access the object storage.

Is there any workaround available to the best of your knowledge?
No

Can this issue reproducible?
Not sure.

Additional info:
Latest ODF must-gather is in supportshell at:
/cases/03468361/0050-must-gather-odf-24032023.tar.gz.gz
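
(For reference, an ODF must-gather like this is typically collected with a command of the following form; the exact image tag depends on the ODF release:)

$ oc adm must-gather --image=registry.redhat.io/odf4/ocs-must-gather-rhel8:v4.11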

Comment 13 pollenbu 2023-03-28 08:29:26 UTC
Is there any update to this case?

Comment 39 Tiffany Nguyen 2023-05-12 17:27:29 UTC
Verified with ODF 4.13 build 4.13.0-186: increased the NooBaa pod resources (settings shown below), then uploaded and listed 1M objects without any issues. Deleting the OBC and the backingstore completed successfully.

$ oc get storagecluster -n openshift-storage ocs-storagecluster -oyaml | yq '.spec.resources'

mgr:
  limits:
    cpu: "3"
    memory: 3Gi
  requests:
    cpu: "3"
    memory: 3Gi
noobaa-core:
  limits:
    cpu: "3"
    memory: 4Gi
  requests:
    cpu: "3"
    memory: 4Gi
noobaa-db:
  limits:
    cpu: "3"
    memory: 4Gi
  requests:
    cpu: "3"
    memory: 4Gi
noobaa-endpoint:
  limits:
    cpu: "3"
    memory: 4Gi
  requests:
    cpu: "3"
    memory: 4Gi
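
One way such resource overrides could be applied (a sketch only, shown for noobaa-core; noobaa-db and noobaa-endpoint would be patched the same way):

$ oc patch storagecluster ocs-storagecluster -n openshift-storage --type merge \
    -p '{"spec":{"resources":{"noobaa-core":{"limits":{"cpu":"3","memory":"4Gi"},"requests":{"cpu":"3","memory":"4Gi"}}}}}'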

Comment 42 errata-xmlrpc 2023-06-21 15:25:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.13.0 enhancement and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3742

