Bug 1514210 - rgw: Fix swift object expiry not deleting objects
Summary: rgw: Fix swift object expiry not deleting objects
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW
Version: 2.4
Hardware: All
OS: All
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: 2.5
Assignee: Matt Benjamin (redhat)
QA Contact: Vidushi Mishra
Docs Contact: Aron Gunn
URL:
Whiteboard:
Duplicates: 1512333 (view as bug list)
Depends On: 1530673
Blocks: 1536401
 
Reported: 2017-11-16 20:03 UTC by Matt Benjamin (redhat)
Modified: 2021-03-11 16:19 UTC (History)
CC List: 13 users

Fixed In Version: RHEL: ceph-10.2.10-13.el7cp Ubuntu: ceph_10.2.10-10redhat1
Doc Type: Bug Fix
Doc Text:
.Objects eligible for expiration are no longer infrequently passed over
Previously, due to an off-by-one error in expiration processing in the Ceph Object Gateway, objects eligible for expiration could infrequently be passed over, and consequently were not removed. The underlying source code has been modified, and the objects are no longer passed over.
Clone Of:
Clones: 1530673 (view as bug list)
Environment:
Last Closed: 2018-02-21 19:46:24 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 22084 0 None None None 2017-11-16 20:03:23 UTC
Red Hat Product Errata RHBA-2018:0340 0 normal SHIPPED_LIVE Red Hat Ceph Storage 2.5 bug fix and enhancement update 2018-02-22 00:50:32 UTC

Description Matt Benjamin (redhat) 2017-11-16 20:03:23 UTC
Description of problem:
In cls_timeindex_list(), even though `to_index` has expired for a given timespan, the marker is advanced to a subsequent index during the time-boundary check. This marker is then returned to RGWObjectExpirer::process_single_shard(), where entries up to out_marker are trimmed from the respective shard, resulting in a lost removal hint and a leaked (never-expired) object.

Reproducer in tracker.
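
For illustration, a minimal sketch of the failure mode (hypothetical, simplified code; this is not the actual cls_timeindex_list() source, and the entry layout and marker format are assumptions):

#include <cstdint>
#include <string>
#include <vector>

struct Entry {
  uint64_t ts;         // timestamp encoded in the index key
  std::string key_ext; // rest of the key (the object removal hint)
};

// List entries that expired strictly before `to_time`. `out_marker` is the
// position the caller may later trim the shard up to.
void list_expired(const std::vector<Entry>& shard, uint64_t to_time,
                  std::vector<Entry>* out, std::string* out_marker) {
  for (const Entry& e : shard) {
    if (e.ts >= to_time) {
      // The buggy behaviour effectively advanced the marker to this entry
      // as well, even though the entry is not returned to the caller.
      break;
    }
    out->push_back(e);
    // Correct behaviour: only advance the marker over entries that were
    // actually returned for processing.
    *out_marker = std::to_string(e.ts) + "_" + e.key_ext;
  }
}

// The caller, analogous to RGWObjectExpirer::process_single_shard(), then
// trims the shard up to out_marker. With the off-by-one above, the first
// not-yet-returned entry is trimmed without ever being processed, so its
// removal hint is lost and the object is never deleted.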

Comment 6 Ken Dreyer (Red Hat) 2018-01-03 15:56:36 UTC
This bug is targeted for RHCEPH 2.5, and the fix is not yet in RHCEPH 3.

Would you please cherry-pick the change to ceph-3.0-rhel-patches (with the RHCEPH 3 clone ID number, "Resolves: rhbz#1530673") so customers do not experience a regression?

Comment 7 Ken Dreyer (Red Hat) 2018-01-03 15:59:28 UTC
*** Bug 1512333 has been marked as a duplicate of this bug. ***

Comment 19 Vidushi Mishra 2018-01-23 08:25:04 UTC
No longer observing the issue. As per the steps followed in comment #11, moving the BZ to VERIFIED on ceph version 10.2.10-14.el7cp.

Comment 24 errata-xmlrpc 2018-02-21 19:46:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0340

