Bug 2294620 - Server-side copy leading to orphaned rados tail objects
Summary: Server-side copy leading to orphaned rados tail objects
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 8.0
Assignee: Casey Bodley
QA Contact: Chaithra
URL:
Whiteboard:
Depends On:
Blocks: 2294621 2317218
 
Reported: 2024-06-27 18:25 UTC by Casey Bodley
Modified: 2025-03-26 04:25 UTC
CC List: 7 users

Fixed In Version: ceph-19.1.1-49.el9cp
Doc Type: Bug Fix
Doc Text:
.Removing an S3 object now properly frees storage space
Previously, in some cases, removing an object larger than 4 MB that had been created with CopyObject did not free all of the storage space used by that object. With this fix, the source and destination handles are passed explicitly into the various RGWRados call paths, and the storage is freed as expected.
Clone Of:
Cloned To: 2294621
Environment:
Last Closed: 2024-11-25 09:02:11 UTC
Embargoed:




Links
System                    ID               Last Updated
Ceph Project Bug Tracker  66286            2024-06-27 18:25:55 UTC
Red Hat Issue Tracker     RHCEPH-9246      2024-06-27 18:26:21 UTC
Red Hat Product Errata    RHBA-2024:10216  2024-11-25 09:02:19 UTC

Description Casey Bodley 2024-06-27 18:25:56 UTC
Description of problem:

After using server-side copy and deleting both copies, their associated rados tail objects are orphaned. This leaks storage capacity.
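
One way to confirm which rados tail objects belong to a large S3 object, and to watch the pool's space usage, is sketched below; the bucket and object names match the reproduction steps that follow, and the exact output fields may vary by release.

$ radosgw-admin object stat --bucket=testbucket --object=128m.iso    # prints the object's manifest, including its tail objects
$ rados df                                                           # per-pool object counts and space used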


Version-Release number of selected component (if applicable):


How reproducible:

100%


Steps to Reproduce:

on a fresh ceph cluster,

1. create a bucket
$ s3cmd mb s3://testbucket

2. upload an object larger than 4mb
$ s3cmd put 128m.iso s3://testbucket

3. create a server-side copy of the object
$ s3cmd cp s3://testbucket/128m.iso s3://testbucket/128m.bak

4. delete both copies of the object
$ s3cmd rm s3://testbucket/128m.iso
$ s3cmd rm s3://testbucket/128m.bak

5. run garbage collection to delete eligible tail objects
$ radosgw-admin gc process --include-all

6. list the contents of the data pool (the pool name may not start with "default."; use `rados lspools` to find it; see also the optional checks after these steps)
$ rados -p default.rgw.buckets.data ls
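
Optionally, the following checks help confirm the leak (the pool name is the default one from step 6 and may differ on your cluster; 'gc list' output formatting can vary by release):

$ radosgw-admin gc list --include-all            # expected to be empty after 'gc process --include-all'
$ rados -p default.rgw.buckets.data ls | wc -l   # a non-zero count here means leaked tail objects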

Actual results:

orphaned tail objects are still present in 'rados ls' output

Expected results:

all tail objects are garbage collected and 'rados ls' output is empty

Additional info:

Comment 1 Storage PM bot 2024-06-27 18:26:04 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 10 errata-xmlrpc 2024-11-25 09:02:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 8.0 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:10216

Comment 11 Red Hat Bugzilla 2025-03-26 04:25:37 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

