Description of problem (please be as detailed as possible and provide log snippets):

[RDR][CEPHFS] Ceph reports 1 large omap objects

$ ceph health detail
HEALTH_WARN 1 large omap objects; 39 daemons have recently crashed
[WRN] LARGE_OMAP_OBJECTS: 1 large omap objects
    1 large objects found in pool 'ocs-storagecluster-cephfilesystem-metadata'
    Search the cluster log for 'Large omap object found' for more details.

Version of all relevant components (if applicable):
OCP version: 4.12.0-0.nightly-2023-01-19-110743
ODF version: 4.12.0-167
Ceph version: ceph version 16.2.10-94.el8cp (48ce8ed67474ea50f10c019b9445be7f49749d23) pacific (stable)
ACM version: v2.7.0
Submariner version: v0.14.1
VolSync version: volsync-product.v0.6.0

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?

Is there any workaround available to the best of your knowledge?

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Deploy an RDR cluster.
2. Deploy a CephFS workload.
3. Check the Ceph status after 3-4 days.

Actual results:

$ ceph health detail
HEALTH_WARN 1 large omap objects; 39 daemons have recently crashed
[WRN] LARGE_OMAP_OBJECTS: 1 large omap objects
    1 large objects found in pool 'ocs-storagecluster-cephfilesystem-metadata'
    Search the cluster log for 'Large omap object found' for more details.

$ ceph -s
  cluster:
    id:     87ba5100-b02c-4bde-a116-32d6e5e6f73a
    health: HEALTH_WARN
            1 large omap objects
            39 daemons have recently crashed

  services:
    mon:        3 daemons, quorum a,b,c (age 4d)
    mgr:        a(active, since 14h)
    mds:        1/1 daemons up, 1 hot standby
    osd:        3 osds: 3 up (since 7d), 3 in (since 7d)
    rbd-mirror: 1 daemon active (1 hosts)
    rgw:        1 daemon active (1 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   12 pools, 353 pgs
    objects: 201.35k objects, 767 GiB
    usage:   2.3 TiB used, 3.7 TiB / 6 TiB avail
    pgs:     352 active+clean
             1   active+clean+snaptrim

Expected results:
No LARGE_OMAP_OBJECTS warning; Ceph health should remain HEALTH_OK.

Additional info:
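For anyone triaging the same warning, a minimal diagnostic sketch (assumptions: commands are run from the rook-ceph toolbox or another host with an admin keyring, `ceph log last` still reaches back to the warning entry, and <object-name> is a placeholder for whatever object the 'Large omap object found' log line names, not a value from this cluster):

# Find the cluster log entry that names the large omap object
$ ceph log last 10000 info cluster | grep -i 'large omap object found'

# Thresholds that trigger LARGE_OMAP_OBJECTS (defaults: 200000 keys / 1 GiB of omap data)
$ ceph config get osd osd_deep_scrub_large_omap_object_key_threshold
$ ceph config get osd osd_deep_scrub_large_omap_object_value_size_threshold

# Count the omap keys on the reported object in the CephFS metadata pool
$ rados -p ocs-storagecluster-cephfilesystem-metadata listomapkeys <object-name> | wc -l

If the cluster log has already rotated past the entry, grepping the mon/OSD pod logs for 'Large omap object found' should turn up the same object name.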
*** This bug has been marked as a duplicate of bug 2120944 ***