
Bug 2329994

Summary: [rgw/listing][rfe]: Timeout in list-objects-v2 operation for large bucket listing
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: RGW
Reporter: Vidushi Mishra <vimishra>
Assignee: J. Eric Ivancich <ivancich>
QA Contact: Vidushi Mishra <vimishra>
Docs Contact: Rivka Pollack <rpollack>
Status: CLOSED ERRATA
Severity: urgent
Priority: unspecified
Version: 7.1
Target Release: 9.0
Target Milestone: ---
Keywords: FutureFeature
Hardware: Unspecified
OS: Unspecified
CC: ceph-eng-bugs, cephqe-warriors, ivancich, mbenjamin, mkasturi, rpollack, tserlin
Fixed In Version: ceph-20.1.0-26
Doc Type: No Doc Update
Type: Bug
Last Closed: 2026-01-29 06:53:26 UTC

Description Vidushi Mishra 2024-12-02 17:36:14 UTC
Description of problem:

When performing a list-objects-v2 operation on the bucket diagnostic-logs-test1, which is configured with 1999 shards and contains 30 million regular objects plus 1.87 million objects in the _multipart_ namespace, the operation times out after listing approximately 26 million objects. However, when paginated with --max-items set to 1000, 100K, or 3M, the operation completes successfully.

Observed Issue:
----------------

Full listing of the bucket without specifying --max-items fails with the error:
Read from remote host 10.x.x.x: Connection reset by peer  
Connection to 10.x.x.x closed.


Snippet of the timeout
------------------------
[root@depressa017 ~]# date; time aws --endpoint http://10.x.x.x:80 s3api list-objects-v2 --bucket diagnostic-logs-test1
.
.
.

        {
            "Key": "data/N27/good/193_tasking/2024-10-31/25/compacted-part-f55a5b45-f11f-4dd7-91e0-79658ca61548-0-26403402",
            "LastModified": "2024-11-28T07:56:01.077Z",
            "ETag": "\"d17951d5ae5b47977b774563ada908f1\"",
            "Size": 4000,
            "StorageClass": "STANDARD"
        },
        {
            "Key": "data/N27/good/193_tasking/2024-10-31/25/compacted-part-f55a5b45-f11f-4dd7-91e0-79658ca61548-0-26403403",
            "LastModified": "2024-11-28T07:56:01.470Z",
            "ETag": "\"343f3b027b7ee6929eba593c1a38a259\"",
            "Size": 4000,
            "StorageClass": "STANDARD"
        },
        {
            "Key": "data/N27/good/193_tasking/2024-10-31/25/compacted-part-f55a5b45-f11f-4dd7-91e0-79658ca61548-0-26403404",
            "LastModified": "2024-11-28T07:56:01.518Z",
            "ETag": "\"7d1e589a7ea6c1946a893c77227b62f4\"",
            "Size": 4000,
            "StorageClass": "STANDARD"
        },
        {
            "Key": "data/N27/good/193_tasking/2024-10-31/25/compacted-part-f55a5b45-f11f-4dd7-91e0-79658ca61548-0-26403405",
            Read from remote host 10.x.x.x: Connection reset by peer
Connection to 10.x.x.x closed.


Bucket Stats
---------------------

[root@depressa016 ~]# date; radosgw-admin bucket stats --bucket diagnostic-logs-test1 | egrep 'bucket|id|owner|shards|objects'
Mon Dec  2 03:30:22 PM UTC 2024
    "bucket": "diagnostic-logs-test1",
    "num_shards": 1999,
    "id": "019a4b7a-9ed6-444e-8eef-30e6292ef70b.254317.1",
    "owner": "user1",
            "num_objects": 30000000
            "num_objects": 1871018
    "bucket_quota": {
        "max_objects": -1


The issue arises after processing approximately 26 million objects.

Command Used:

- Full Listing (Fails):

aws --endpoint http://10.x.x.x:80 s3api list-objects-v2 --bucket diagnostic-logs-test1

- Paginated Listing (Successful):

aws --endpoint http://10.x.x.x:80 s3api list-objects-v2 --bucket diagnostic-logs-test1 --max-items <1000|100000|3000000>
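
The paginated workaround succeeds because ListObjectsV2 is drained in bounded pages, each a separate request, rather than over one long-lived connection. As a minimal sketch of that continuation-token loop (the helper names `list_all_keys` and `make_fake_bucket` are illustrative only, not RGW or AWS SDK code; a real client would issue each page via boto3's `list_objects_v2`):

```python
def list_all_keys(list_page, page_size=1000):
    """Drain an S3-style ListObjectsV2 listing page by page.

    `list_page(token, page_size)` models one ListObjectsV2 call: it
    returns a dict with "Contents" (a list of {"Key": ...} entries),
    "IsTruncated", and, when truncated, "NextContinuationToken".
    """
    keys, token = [], None
    while True:
        page = list_page(token, page_size)
        keys.extend(obj["Key"] for obj in page.get("Contents", []))
        if not page.get("IsTruncated"):
            return keys
        # Each iteration is an independent request; no single
        # connection has to stay open for the whole listing.
        token = page["NextContinuationToken"]


def make_fake_bucket(all_keys):
    """In-memory stand-in for a bucket, used only to exercise the loop."""
    def list_page(token, page_size):
        start = int(token) if token else 0
        chunk = all_keys[start:start + page_size]
        page = {"Contents": [{"Key": k} for k in chunk],
                "IsTruncated": start + page_size < len(all_keys)}
        if page["IsTruncated"]:
            page["NextContinuationToken"] = str(start + page_size)
        return page
    return list_page
```

With real S3, boto3's `get_paginator("list_objects_v2")` performs the same token-following loop, which is presumably why the --max-items runs above complete while the single monolithic listing hits a connection reset.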

Version-Release number of selected component (if applicable):
ceph version 18.2.1-229.0.hotfix.bz2327880.el9cp (1b5b765b240e2172c930f14b762fcd7a11147e50) reef (stable)

How reproducible:

Seen on this bucket 3/3 times.

Steps to Reproduce:
1. Create a bucket with 1999 shards and populate it with ~30 million regular objects plus ~1.87 million objects in the _multipart_ namespace.
2. Run a full listing without --max-items: aws --endpoint http://10.x.x.x:80 s3api list-objects-v2 --bucket diagnostic-logs-test1
3. Observe the listing after roughly 26 million objects.

Actual results:

The full listing fails with "Read from remote host 10.x.x.x: Connection reset by peer" after approximately 26 million objects.

Expected results:

The full listing completes for all objects in the bucket.

Additional info:

Comment 10 errata-xmlrpc 2026-01-29 06:53:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 9.0 Security and Enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2026:1536