
Bug 2364323

Summary: RGW bucket index downsharding does not shrink the shard count to the default of 11 after LC deletion of a large number of objects.
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: RGW
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: unspecified
Status: CLOSED ERRATA
Reporter: Vidushi Mishra <vimishra>
Assignee: J. Eric Ivancich <ivancich>
QA Contact: Vidushi Mishra <vimishra>
Docs Contact: Rivka Pollack <rpollack>
CC: ceph-eng-bugs, cephqe-warriors, ivancich, mbenjamin, rpollack, tserlin
Target Release: 9.0
Fixed In Version: ceph-20.1.0-26
Doc Type: No Doc Update
Type: Bug
Last Closed: 2026-01-29 06:48:43 UTC

Description Vidushi Mishra 2025-05-06 07:35:17 UTC
Description of problem:

In a Ceph RGW multisite setup, bucket index downsharding does not shrink the shard count back to the default of 11 shards after lifecycle expiration has deleted a large number of objects.

The test scenario involved deleting ~100M objects spread across 4 buckets (~25M objects per bucket).

After all objects were deleted via lifecycle expiration, the bucket index was reduced to only 59-61 shards instead of the expected default of 11, despite every index shard holding zero objects.
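For reference, the shard count and object count of each bucket can be read from 'radosgw-admin bucket stats'. Below is a minimal Python sketch for checking this, assuming radosgw-admin is available on the node; the bucket names are placeholders, not the actual test buckets.

# Minimal sketch: read the current index shard count and object count
# of each affected bucket via 'radosgw-admin bucket stats'.
import json
import subprocess

# Hypothetical bucket names; substitute the buckets under test.
BUCKETS = ["test-bucket-1", "test-bucket-2", "test-bucket-3", "test-bucket-4"]

for bucket in BUCKETS:
    out = subprocess.check_output(
        ["radosgw-admin", "bucket", "stats", f"--bucket={bucket}"]
    )
    stats = json.loads(out)
    usage = stats.get("usage", {}).get("rgw.main", {})
    print(f"{bucket}: num_shards={stats['num_shards']} "
          f"num_objects={usage.get('num_objects', 0)}")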

Version-Release number of selected component (if applicable):
ceph version 19.2.1-147.el9cp

How reproducible:
Seen at a scale of ~100 million object deletions.

Steps to Reproduce:
1. Create 4 buckets.

2. Upload ~25 million objects per bucket (~100 million total).

3. Verify that RGW dynamic resharding grows the bucket index to higher shard counts as expected (~770+ shards per bucket).

4. Configure and apply an S3 lifecycle expiration rule to delete all objects in all buckets (a scaled-down sketch of steps 1, 2, and 4 follows this list).

5. Wait for lifecycle expiration to complete and remove all objects.

6. Observe the automatic downsharding behavior of bucket indexes after deletions.
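A scaled-down reproduction of steps 1, 2, and 4 might look like the sketch below. The endpoint, credentials, bucket names, and object count are assumptions for illustration; the actual test used ~25 million objects per bucket.

# Scaled-down sketch of steps 1, 2, and 4: create buckets, upload
# objects, and apply an S3 lifecycle expiration rule via boto3.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",  # assumed RGW endpoint
    aws_access_key_id="ACCESS_KEY",              # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

NUM_BUCKETS = 4
OBJECTS_PER_BUCKET = 1000  # the actual test used ~25 million per bucket

for i in range(1, NUM_BUCKETS + 1):
    bucket = f"test-bucket-{i}"
    s3.create_bucket(Bucket=bucket)

    for n in range(OBJECTS_PER_BUCKET):
        s3.put_object(Bucket=bucket, Key=f"obj-{n:08d}", Body=b"x")

    # Expire every object one day after creation.
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-all",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "Expiration": {"Days": 1},
                }
            ]
        },
    )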

Actual results:

The buckets downsharded to only 59 or 61 shards, not to 11, despite zero objects remaining.

Expected results:

After lifecycle expiration removes all objects and the index shrinks, the bucket index should downshard to the default shard count (11 shards).

Additional info:

No changes were made to the RGW reshard configuration options; everything was left at defaults (a sketch for confirming this follows).
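The reshard-related defaults can be confirmed from the cluster configuration. A minimal sketch, assuming the ceph CLI is available on an admin node and that the RGW daemons consume config under client.rgw:

# Sketch: confirm the reshard-related RGW options are at their defaults
# by querying the cluster configuration with 'ceph config get'.
import subprocess

OPTIONS = [
    "rgw_dynamic_resharding",      # dynamic resharding on/off
    "rgw_max_objs_per_shard",      # objects per shard before splitting
    "rgw_max_dynamic_shards",      # upper bound on dynamic shard count
    "rgw_reshard_thread_interval", # how often the reshard thread runs
]

for opt in OPTIONS:
    value = subprocess.check_output(
        ["ceph", "config", "get", "client.rgw", opt], text=True
    ).strip()
    print(f"{opt} = {value}")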

Comment 13 errata-xmlrpc 2026-01-29 06:48:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 9.0 Security and Enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2026:1536