
Bug 2356678

Summary: rgw: tail objects are wrongly deleted in copy_object
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Matt Benjamin (redhat) <mbenjamin>
Component: RGW
Assignee: J. Eric Ivancich <ivancich>
Status: CLOSED ERRATA
QA Contact: Madhavi Kasturi <mkasturi>
Severity: urgent
Docs Contact: Rivka Pollack <rpollack>
Priority: unspecified
Version: 8.0
CC: ceph-eng-bugs, cephqe-warriors, ckulal, ivancich, mcaldeir, rpollack, tpetr, tserlin
Target Milestone: ---
Target Release: 8.1
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-19.2.1-131.el9cp
Doc Type: Bug Fix
Doc Text:
.Tail objects no longer wrongly deleted with `copy-object`
Previously, a reference count invariant on tail objects was not maintained when an object was copied to itself. In that case the existing object is changed in place rather than copied, and as a result the references to its tail objects were decremented. When the refcount on a tail object dropped to 0, it was deleted during the next garbage collection (GC) cycle. With this fix, the refcount on tail objects is no longer decremented when completing a copy-to-self.
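To make the fixed invariant concrete, here is a minimal Python sketch of the tail refcount bookkeeping (an illustration only, not the actual RGW C++ code from the copy_object path; all names here are hypothetical):

    # Model: tail_refcount maps a tail rados object name to its
    # reference count; a count of 0 makes the tail eligible for GC.
    def complete_copy_object(src_key, dst_key, src_tails, tail_refcount):
        if src_key == dst_key:
            # Copy-to-self (e.g. a REPLACE-metadata copy) rewrites the
            # head in place and keeps the same tails, so the refcounts
            # must be left untouched. Before the fix, the old object's
            # tail references were still dropped at this point, driving
            # the count to 0 and letting the next GC cycle delete live data.
            return
        # A genuine copy shares the source's tails, so each gains a reference.
        for tail in src_tails:
            tail_refcount[tail] = tail_refcount.get(tail, 0) + 1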
Story Points: ---
Clone Of:
Clones: 2359825 2366626 (view as bug list)
Environment:
Last Closed: 2025-06-26 12:21:55 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 2351689, 2359825, 2366626

Description Matt Benjamin (redhat) 2025-04-01 16:25:20 UTC
This happens in both squid and main on objects that are large enough to have tail rados objects (>4 MB).
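As a rough illustration of why the 4 MB threshold matters (a sketch assuming RGW's default 4 MiB chunk/stripe size; the actual layout depends on configuration):

    # A 10 MiB object keeps its first ~4 MiB in the head rados object
    # and spills the remainder into tail objects, which is what makes
    # it vulnerable to this bug.
    CHUNK = 4 * 1024 * 1024          # assumed default head/stripe size
    size = 10 * 1024 * 1024          # the reproducer's file_10m
    tail_bytes = max(0, size - CHUNK)
    num_tails = -(-tail_bytes // CHUNK)   # ceiling division
    print(num_tails)                 # -> 2 tail objects follow the head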
Reproducer:

    start a vstart cluster
    do the following on client side

    $ s3cmd -c s3cfg.vstart --host=http://localhost:8000 mb s3://bar
    $ s3cmd -c s3cfg.vstart --host=http://localhost:8000 put ../test_files/file_10m s3://bar/file_10m
    $ aws s3api --profile vstart --endpoint=http://localhost:8000 copy-object --copy-source bar/file_10m --key file_10m --bucket bar --metadata-directive "REPLACE" --content-type "text/plain" 

    in vstart

    $ bin/radosgw-admin -n client.rgw.8000 gc process --include-all

    on client side

    $ aws s3api --profile vstart --endpoint=http://localhost:8000 head-object --key file_10m --bucket bar
    {
        "AcceptRanges": "bytes",
        "LastModified": "2025-04-01T13:30:03+00:00",
        "ContentLength": 10485760,
        "ETag": "\"1e1d3a01dfedd497cbdd0ca9a39b1e72-2\"",
        "ContentType": "text/plain",
        "Metadata": {},
        "PartsCount": 2
    }

    $ aws s3api --profile vstart --endpoint=http://localhost:8000 get-object --key file_10m --bucket bar file_10m
    argument of type 'NoneType' is not iterable


    get-object fails with "NoSuchKey".
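For convenience, an equivalent reproducer sketch in Python with boto3 (assumes the same vstart endpoint on localhost:8000 and credentials under a "vstart" profile, as in the CLI steps above):

    import boto3

    session = boto3.Session(profile_name="vstart")
    s3 = session.client("s3", endpoint_url="http://localhost:8000")

    s3.create_bucket(Bucket="bar")
    with open("../test_files/file_10m", "rb") as f:
        s3.upload_fileobj(f, "bar", "file_10m")

    # The copy-onto-itself with a metadata REPLACE is the operation
    # that wrongly decrements the tail refcounts before the fix.
    s3.copy_object(
        Bucket="bar",
        Key="file_10m",
        CopySource={"Bucket": "bar", "Key": "file_10m"},
        MetadataDirective="REPLACE",
        ContentType="text/plain",
    )

    # After `radosgw-admin gc process --include-all` runs on the
    # cluster, head-object still succeeds but get-object raises
    # NoSuchKey because the tail objects are gone.
    print(s3.head_object(Bucket="bar", Key="file_10m")["ContentLength"])
    s3.get_object(Bucket="bar", Key="file_10m")   # fails post-GC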

Comment 5 Tomas Petr 2025-04-30 09:45:05 UTC
*** Bug 2363050 has been marked as a duplicate of this bug. ***

Comment 9 errata-xmlrpc 2025-06-26 12:21:55 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:9775

Comment 10 Red Hat Bugzilla 2025-10-25 04:25:40 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days