In RGW workloads, large quantities of objects that appear to be orphans accumulate in the data pool. The leaked part objects belong to completed multipart uploads. Engineering believes the primary underlying issue is the ability to generate arbitrarily large space leaks by re-uploading the same multipart part multiple times. This affects every RGW version that supports S3 multipart upload. The root cause is that, although RGW contains logic to detect that an upload-part operation has conflicted with a prior upload of the same part, the code handling that case resolves the naming conflict but does not accumulate the full set of object names generated by all upload attempts for a given part; instead it overwrites the metadata for prior uploads of the part with that of the latest attempt. To fix this, we propose moving the serialization and storage of part-upload metadata into RGW's OSD-side CLS interface, where it is straightforward to combine existing and new part metadata and to avoid races between simultaneous uploads of the same part. Secondarily, this extra historical data will be used to clean up completed and aborted multipart uploads. Pull request: https://github.com/ceph/ceph/pull/37260
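For illustration only, here is a minimal sketch of the scenario described above: re-uploading the same part number several times before completing a multipart upload. The endpoint URL, credentials, bucket name, and object key are placeholders, and the part count of one is an assumption chosen to keep the example short; on affected RGW versions, the RADOS objects backing the superseded part uploads are the ones reported to be left behind in the data pool.

import boto3

# Placeholder RGW endpoint and credentials; substitute values for your cluster.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

bucket, key = "test-bucket", "leak-demo"      # assumed pre-existing bucket
part_data = b"x" * (5 * 1024 * 1024)          # 5 MiB, the minimum S3 part size

upload = s3.create_multipart_upload(Bucket=bucket, Key=key)
upload_id = upload["UploadId"]

# Upload part number 1 repeatedly. Each attempt writes new backing objects,
# but per the description above only the latest attempt's metadata is kept.
for _ in range(5):
    resp = s3.upload_part(
        Bucket=bucket,
        Key=key,
        PartNumber=1,
        UploadId=upload_id,
        Body=part_data,
    )

# Completing the upload succeeds, yet the objects from the earlier attempts
# are reported to remain in the data pool as apparent orphans.
s3.complete_multipart_upload(
    Bucket=bucket,
    Key=key,
    UploadId=upload_id,
    MultipartUpload={"Parts": [{"ETag": resp["ETag"], "PartNumber": 1}]},
)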
Since the problem described in this bug report is resolved by a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update) and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2023:7780