Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
This project is now read-only. Starting Monday, February 2, please use https://ibm-ceph.atlassian.net/ for all bug tracking management.

Bug 2008835

Summary: [GSS][RGW]Arbitrarily-large space leaks generated by re-uploading the same multi-part part multiple times
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Geo Jose <gjose>
Component: RGW
Assignee: Matt Benjamin (redhat) <mbenjamin>
Status: CLOSED ERRATA
QA Contact: Tejas <tchandra>
Severity: medium
Docs Contact: Rivka Pollack <rpollack>
Priority: unspecified
Version: 4.2
CC: akraj, cbodley, ceph-eng-bugs, kbader, kdreyer, mbenjamin, mmuench, rpollack, tchandra, vereddy
Target Milestone: ---
Target Release: 7.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-18.2.0-1
Doc Type: Bug Fix
Doc Text:
.RADOS object multipart upload workflows complete properly. Previously, when the same part of a multipart upload was uploaded more than once, the RADOS objects created by the earlier upload attempts were not tracked and were left behind in the data pool. With this fix, all parts are uploaded and cleaned up correctly once the multipart upload workflow is complete.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-12-13 15:18:39 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 2237662    

Description Geo Jose 2021-09-29 10:06:10 UTC
In RGW workloads, large quantities of objects accumulate in the data pool and appear to be orphans. The leaked part objects belong to completed multipart uploads.

Engineering believes this is the primary underlying issue: the ability to generate arbitrarily large space leaks by re-uploading the same multipart part multiple times. This affects all RGW versions that have supported S3 multipart upload.

The root cause is that, although RGW contains logic to detect that an upload-part operation conflicts with a prior upload of the same part, the code handling that case resolves the naming conflict but does not accumulate the full set of object names generated by all upload attempts for a given part; instead, it overwrites the metadata for prior uploads of the part with that of the latest one.
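The overwrite behavior can be illustrated with a toy model (hypothetical names and structures, not Ceph code): each re-upload of a part writes its chunks under a fresh random prefix, but the part metadata keeps only the latest prefix, so any cleanup driven by that metadata never sees the earlier attempts' objects.

```python
import uuid

data_pool = set()   # all RADOS objects currently in the data pool
part_meta = {}      # part number -> single prefix (models the buggy overwrite)

def upload_part(part_num, num_chunks=3):
    """Simulate one upload attempt of a part: new random prefix per attempt."""
    prefix = uuid.uuid4().hex
    for i in range(num_chunks):
        data_pool.add(f"{prefix}.{part_num}.{i}")
    part_meta[part_num] = prefix  # BUG: discards the previous attempt's prefix

def leaked_objects():
    """Objects no longer reachable from the part metadata."""
    live_prefixes = set(part_meta.values())
    return {o for o in data_pool if o.split(".")[0] not in live_prefixes}

# Re-upload the same part three times: the two earlier attempts
# (2 attempts x 3 chunks = 6 objects) become unreachable orphans.
upload_part(1)
upload_part(1)
upload_part(1)
```

Because only the last prefix survives in `part_meta`, repeating the upload grows the orphan set without bound, matching the "arbitrarily large" leak described above.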

To fix this, we propose moving the serialization and storage of part-upload metadata into RGW's OSD-side CLS interface, where it is straightforward to combine existing and new part metadata and to avoid races between simultaneous uploads of the same part. Secondarily, this historical data will be used to clean up completed and aborted multipart uploads.


Pull request: https://github.com/ceph/ceph/pull/37260

Comment 1 RHEL Program Management 2021-09-29 10:06:17 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 26 errata-xmlrpc 2023-12-13 15:18:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:7780

Comment 27 Red Hat Bugzilla 2024-04-12 04:25:04 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.