Bug 2294621

Summary: Server-side copy leading to orphaned rados tail objects
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Casey Bodley <cbodley>
Component: RGW
Assignee: Matt Benjamin (redhat) <mbenjamin>
Status: CLOSED ERRATA
QA Contact: Chaithra <ckulal>
Severity: high
Docs Contact: Akash Raj <akraj>
Priority: unspecified
Version: 7.0
CC: akraj, ceph-eng-bugs, cephqe-warriors, ckulal, mbenjamin, mkasturi, mwatts, rpollack, tserlin
Target Milestone: ---
Target Release: 7.1z1
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-18.2.1-215.el9cp
Doc Type: Bug Fix
Doc Text:
.Removing an S3 object now properly frees storage space
Previously, in some cases, when an object larger than 4 MB that had been created by a CopyObject request was removed, the storage space used by that object was not fully freed. With this fix, the source and destination handles are passed explicitly into the relevant RGWRados call paths, and the storage is freed as expected.
Story Points: ---
Clone Of: 2294620
Environment:
Last Closed: 2024-08-07 11:20:49 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 2294620    
Bug Blocks:    

Description Casey Bodley 2024-06-27 18:28:57 UTC
+++ This bug was initially created as a clone of Bug #2294620 +++

Description of problem:

After creating a server-side copy of an object and then deleting both the original and the copy, the associated rados tail objects are orphaned rather than garbage collected. This leaks storage capacity.
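
One way to confirm the leak (a rough sketch; the pool name below is the default and may differ per cluster - use `rados lspools` to find it) is to compare the data pool's usage and object count before the copy/delete cycle and again after garbage collection:

$ ceph df | grep buckets.data                    # per-pool usage should return to ~0 after GC
$ rados -p default.rgw.buckets.data ls | wc -l   # object count should drop back to 0 after GC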


Version-Release number of selected component (if applicable):


How reproducible:

100%


Steps to Reproduce:

on a fresh ceph cluster,

1. create a bucket
$ s3cmd mb s3://testbucket

2. upload an object larger than 4 MB
$ s3cmd put 128m.iso s3://testbucket

3. create a server-side copy of the object
$ s3cmd cp s3://testbucket/128m.iso s3://testbucket/128m.bak

4. delete both copies of the object
$ s3cmd rm s3://testbucket/128m.iso
$ s3cmd rm s3://testbucket/128m.bak

5. run garbage collection to delete eligible tail objects
$ radosgw-admin gc process --include-all

6. list the contents of the data pool (the pool name may not start with "default." - use `rados lspools` to find it; see also the optional checks after these steps)
$ rados -p default.rgw.buckets.data ls
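
Optional checks (not part of the steps above; a sketch assuming the default pool name): the GC queue can be inspected around step 5, and any leaked tail objects can be filtered by name, since head objects are named after the S3 key while tail objects typically carry a "__shadow_" or "__multipart_" marker in their names:

$ radosgw-admin gc list --include-all                           # entries pending garbage collection
$ rados -p default.rgw.buckets.data ls | grep -E '__(shadow|multipart)_'   # any output after GC indicates leaked tail objects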

Actual results:

orphaned tail objects are still present in 'rados ls' output

Expected results:

all tail objects are garbage collected and 'rados ls' output is empty

Additional info:

Comment 1 Storage PM bot 2024-06-27 18:29:10 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 8 errata-xmlrpc 2024-08-07 11:20:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.1 security and bug fix update.), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:5080

Comment 9 Red Hat Bugzilla 2024-12-06 04:25:13 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days