Bug 1318409 - RGW deletion is sequential and slow on large buckets of objects
Status: CLOSED ERRATA
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: RGW
Version: 1.3.2
Hardware: x86_64 Linux
Priority: high  Severity: high
Target Milestone: rc
Target Release: 2.1
Assigned To: Yehuda Sadeh
QA Contact: Vasishta
Docs Contact: Bara Ancincova
Depends On:
Blocks: 1383917
Reported: 2016-03-16 15:20 EDT by Benjamin Schmaus
Modified: 2017-07-30 11:54 EDT
CC List: 13 users

See Also:
Fixed In Version: RHEL: ceph-10.2.3-5.el7cp Ubuntu: ceph_10.2.3-6redhat1xenial
Doc Type: Bug Fix
Doc Text:
.Ceph Object Gateway now deletes large buckets in parallel
Previously, the Ceph Object Gateway was unable to delete multiple large buckets at the same time. As a consequence, the process of deleting large buckets containing millions of objects was slow. A patch has been applied, and the Ceph Object Gateway now deletes large buckets in parallel, which makes the whole process significantly faster.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-11-22 14:25:06 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 2901171 None None None 2017-02-02 16:05 EST
Ceph Project Bug Tracker 15557 None None None 2016-09-26 19:09 EDT
Red Hat Product Errata RHSA-2016:2815 normal SHIPPED_LIVE Moderate: Red Hat Ceph Storage security, bug fix, and enhancement update 2017-03-21 22:06:33 EDT

Description Benjamin Schmaus 2016-03-16 15:20:37 EDT
Description of problem: When using RGW to delete large buckets containing 50-100 million or more objects, the process is slow and appears to be sequential in nature.


Version-Release number of selected component (if applicable): 1.3.2


How reproducible: 100%


Steps to Reproduce:
1. Create a Ceph cluster on 1.3.2
2. Load up a bucket with many millions of objects
3. Delete objects in the bucket

Actual results: Deletion is slow; objects are removed one at a time, so large deletes take a long time.


Expected results: RGW should be able to delete multiple objects in parallel and speed up the process when doing large deletes.
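The expected behavior can be illustrated with a minimal sketch of parallel versus sequential deletion. This is not the actual RGW fix (which lives inside `radosgw-admin bucket rm --bypass-gc --purge-objects`); `FakeBucket` and `delete_object` are hypothetical stand-ins for a per-object delete call such as an S3 DeleteObject request:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class FakeBucket:
    """Hypothetical stand-in for a bucket; real deletes would be RGW/S3 calls."""
    def __init__(self, keys):
        self.keys = set(keys)
        self._lock = threading.Lock()

    def delete_object(self, key):
        # Thread-safe single-object delete.
        with self._lock:
            self.keys.discard(key)

def delete_all_parallel(bucket, max_workers=16):
    # Issue deletes concurrently instead of one request at a time,
    # which is the behavior this bug asks for.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        list(pool.map(bucket.delete_object, list(bucket.keys)))

bucket = FakeBucket(f"obj-{i}" for i in range(10_000))
delete_all_parallel(bucket)
print(len(bucket.keys))  # 0 -- all objects removed
```

With real network round-trips, issuing deletes from a worker pool hides per-request latency, which is where the speedup on multi-million-object buckets comes from.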


Additional info:
Comment 21 Ken Dreyer (Red Hat) 2016-09-26 19:09:26 EDT
Jewel backport ongoing at https://github.com/ceph/ceph/pull/10661 for v10.2.4 (currently fails to build there, looks like the RGW team will need to resolve the failure)
Comment 22 Matt Benjamin (redhat) 2016-09-29 17:22:56 EDT
(In reply to Ken Dreyer (Red Hat) from comment #21)
> Jewel backport ongoing at https://github.com/ceph/ceph/pull/10661 for
> v10.2.4 (currently fails to build there, looks like the RGW team will need
> to resolve the failure)

PR #10661 updated with a proposed compile fix and motivation.
Comment 24 Matt Benjamin (redhat) 2016-10-06 11:22:18 EDT
To reproduce, create an rgw bucket and put any substantial number of objects, then measure the time required to sequentially delete each object and then the bucket.

After this change, that time can be compared with the result of

"time radosgw-admin bucket rm --bucket=<bucket name> --bypass-gc --purge-objects"

having first recreated the bucket and objects.
Comment 27 Vasishta 2016-11-02 08:01:41 EDT
Hi,

I simultaneously tried purging one bucket and deleting all objects in another. Each bucket had 5 million objects, and the buckets were the same size. When the first bucket finished purging, the other still had more than 1.8 million objects remaining.
Purging took 64 hours; deletion took more than 110 hours.
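As a rough back-of-envelope check on these timings (since the 110 h figure is a lower bound, the real speedup is at least this large):

```python
objects = 5_000_000

purge_rate = objects / (64 * 3600)     # ~21.7 objects/s (purge)
delete_rate = objects / (110 * 3600)   # ~12.6 objects/s (sequential delete)

# Speedup of purge over sequential deletion, at minimum:
print(round(purge_rate / delete_rate, 2))  # 1.72
```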


Since purging is faster than sequential deletion, per the observation above, I'm closing this bug.

Please let me know if there are any concerns or issues.


Regards,
Vasishta
Comment 30 errata-xmlrpc 2016-11-22 14:25:06 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2815.html
