Bug 2165608 - [RDR][CEPHFS] Ceph reports 1 large omap objects
Summary: [RDR][CEPHFS] Ceph reports 1 large omap objects
Keywords:
Status: CLOSED DUPLICATE of bug 2120944
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ceph
Version: 4.12
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Venky Shankar
QA Contact: Pratik Surve
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-01-30 14:42 UTC by Pratik Surve
Modified: 2023-08-09 16:37 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-02-22 02:38:38 UTC
Embargoed:



Description Pratik Surve 2023-01-30 14:42:04 UTC
Description of problem (please be as detailed as possible and provide log snippets):

[RDR][CEPHFS] Ceph reports 1 large omap objects

$ ceph health detail
HEALTH_WARN 1 large omap objects; 39 daemons have recently crashed
[WRN] LARGE_OMAP_OBJECTS: 1 large omap objects
    1 large objects found in pool 'ocs-storagecluster-cephfilesystem-metadata'
    Search the cluster log for 'Large omap object found' for more details.
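
To pinpoint the offending object, one option (a sketch: the threshold option is the upstream Ceph one, and the pod name below is an illustrative placeholder for a Rook/ODF deployment) is to check the large-omap threshold and search the OSD logs for the warning emitted by the deep scrub that detected it:

# Key-count threshold above which deep scrub flags an object as a large omap object
$ ceph config get osd osd_deep_scrub_large_omap_object_key_threshold

# The deep scrub logs the object name and key count; in a Rook/ODF deployment
# the OSD pod logs can be searched, e.g.:
$ oc logs -n openshift-storage <rook-ceph-osd-pod> | grep -i 'large omap object'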


Version of all relevant components (if applicable):
OCP version:- 4.12.0-0.nightly-2023-01-19-110743
ODF version:- 4.12.0-167
CEPH version:- ceph version 16.2.10-94.el8cp (48ce8ed67474ea50f10c019b9445be7f49749d23) pacific (stable)
ACM version:- v2.7.0
SUBMARINER version:- v0.14.1
VOLSYNC version:- volsync-product.v0.6.0

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy an RDR cluster
2. Deploy a CephFS workload
3. Check the Ceph status after 3-4 days (see the sketch below)
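
A minimal way to run the Ceph commands for step 3 on an ODF cluster is through the Rook toolbox pod (assuming the toolbox has been enabled; the namespace and label below are the usual ones for ODF, so treat them as assumptions):

# Find the toolbox pod and run the health checks through it
$ TOOLS_POD=$(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name)
$ oc rsh -n openshift-storage $TOOLS_POD ceph health detail
$ oc rsh -n openshift-storage $TOOLS_POD ceph -s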


Actual results:
$ ceph health detail
HEALTH_WARN 1 large omap objects; 39 daemons have recently crashed
[WRN] LARGE_OMAP_OBJECTS: 1 large omap objects
    1 large objects found in pool 'ocs-storagecluster-cephfilesystem-metadata'
    Search the cluster log for 'Large omap object found' for more details.

$ ceph -s
  cluster:
    id:     87ba5100-b02c-4bde-a116-32d6e5e6f73a
    health: HEALTH_WARN
            1 large omap objects
            39 daemons have recently crashed
 
  services:
    mon:        3 daemons, quorum a,b,c (age 4d)
    mgr:        a(active, since 14h)
    mds:        1/1 daemons up, 1 hot standby
    osd:        3 osds: 3 up (since 7d), 3 in (since 7d)
    rbd-mirror: 1 daemon active (1 hosts)
    rgw:        1 daemon active (1 hosts, 1 zones)
 
  data:
    volumes: 1/1 healthy
    pools:   12 pools, 353 pgs
    objects: 201.35k objects, 767 GiB
    usage:   2.3 TiB used, 3.7 TiB / 6 TiB avail
    pgs:     352 active+clean
             1   active+clean+snaptrim
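
Since the warning points at the CephFS metadata pool, the omap key counts of its objects can also be inspected directly with rados (a sketch: <object-name> is a placeholder for whichever object the OSD log names, and listing a busy metadata pool can produce a lot of output):

# List the objects in the metadata pool, then count the omap keys on a suspect object
$ rados -p ocs-storagecluster-cephfilesystem-metadata ls > metadata-objects.txt
$ rados -p ocs-storagecluster-cephfilesystem-metadata listomapkeys <object-name> | wc -l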


Expected results:


Additional info:

Comment 9 Venky Shankar 2023-02-22 02:38:38 UTC

*** This bug has been marked as a duplicate of bug 2120944 ***

