Bug 2239831
| Summary: | [rgw-ms][archive]: Observing large omaps on the archive zone for a few buckets uploaded with ~1.65M objects which were not adequately dynamically resharded. |
|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage |
| Component: | RGW-Multisite |
| Version: | 6.1 |
| Target Release: | 7.0 |
| Status: | CLOSED ERRATA |
| Severity: | high |
| Priority: | unspecified |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Reporter: | Vidushi Mishra <vimishra> |
| Assignee: | shilpa <smanjara> |
| QA Contact: | Vidushi Mishra <vimishra> |
| Docs Contact: | Rivka Pollack <rpollack> |
| CC: | akraj, ceph-eng-bugs, cephqe-warriors, hklein, mbenjamin, rsachere, smanjara, tchandra, tserlin |
| Fixed In Version: | ceph-18.2.0-103.el9cp |
| Doc Type: | No Doc Update |
| Type: | Bug |
| Clones: | 2255983 (view as bug list) |
| Bug Blocks: | 2237662, 2255983 |
| Last Closed: | 2023-12-13 15:23:24 UTC |
Comment 12
shilpa
2023-10-11 05:05:40 UTC
(In reply to shilpa from comment #12)
> Thanks for the logs, Vidushi.
>
> I analyzed one of the bucket index shards, '.dir.8dc62baa-afc5-496b-9ac1-692c719856ff.324610.4.1.5' for newcontainer-3, which has a large omap with > 200k entries:
>
>     # rados -p archive.rgw.buckets.index listomapkeys .dir.8dc62baa-afc5-496b-9ac1-692c719856ff.324610.4.1.5 | wc -l
>     229646
>
> The following two reasons could be the cause:
>
> 1. Both the primary and secondary zones have resharded the bucket 'newcontainer-3' once to num_shards~=127, which is correct, whereas on the archive zone the bucket has been dynamically resharded to only num_shards=29.
>
> At https://github.com/ceph/ceph/blob/main/src/rgw/rgw_quota.cc#L976, for multisite configurations we set the obj_multiplier to 8 instead of 2 so that the shard count grows faster and fewer reshard events are needed. But we recently disabled data logging in the archive zone by passing this bool as false in RGWRados::check_bucket_shards(), which makes it fall back to the non-multisite reshard multiplier of 2; that is probably why the bucket was resharded to only 29 shards.

Hi Shilpa, since dynamic resharding is not resharding adequately on the archive zone, should we try a manual reshard on the bucket to test whether that resolves the large omaps?

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:7780

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.
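For readers reproducing the analysis above, a minimal sketch of how the under-sharding and the large omaps can be confirmed from the command line. The pool name and the bucket-instance marker are taken from the shard object quoted in the comment; output fields and thresholds vary by release.

```sh
# List each bucket's shard count, objects per shard, and fill status;
# under-sharded buckets show up with a WARN/OVER fill_status here.
radosgw-admin bucket limit check

# Count the omap entries in every index shard object of the affected bucket
# instance (marker taken from the shard name quoted in this comment).
for shard in $(rados -p archive.rgw.buckets.index ls | grep '^\.dir\.8dc62baa-afc5-496b-9ac1-692c719856ff\.324610\.4'); do
    echo "$shard: $(rados -p archive.rgw.buckets.index listomapkeys "$shard" | wc -l)"
done
```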
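On the manual-reshard question above, a possible sketch rather than a confirmed procedure for this release; the target of 127 shards simply mirrors what the primary and secondary zones arrived at, so verify the appropriate count for the bucket's object count before running it.

```sh
# Reshard the bucket on the archive zone to the shard count the other
# zones reached through dynamic resharding.
radosgw-admin bucket reshard --bucket=newcontainer-3 --num-shards=127

# Check whether the reshard has completed for the bucket.
radosgw-admin reshard status --bucket=newcontainer-3
```

If the large-omap warnings clear after the index shards are recounted, that would support the multiplier explanation given in comment 12 as an interim workaround until the fixed build is deployed.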