Bug 1187821 - .rgw pool contains extra objects
Summary: .rgw pool contains extra objects
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW
Version: 1.2.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: pre-dev-freeze
Target Release: 1.3.0
Assignee: Yehuda Sadeh
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks: 1212524
 
Reported: 2015-01-30 22:07 UTC by tbrekke
Modified: 2022-02-21 18:41 UTC
CC List: 13 users

Fixed In Version: ceph-0.94.1-3.el7cp
Doc Type: Bug Fix
Doc Text:
Recreating a previously deleted bucket in RGW did not remove the old bucket instance metadata object and thus left a redundant object in the .rgw pool. This update addresses the problem, and redundant objects are no longer generated in the described scenario.
Clone Of:
Clones: 1212524
Environment:
Last Closed: 2015-06-24 15:50:12 UTC
Embargoed:


Attachments


Links
System                    ID              Private  Priority  Status        Summary                              Last Updated
Ceph Project Bug Tracker  11149           0        None      None          None                                 Never
Red Hat Issue Tracker     RHCEPH-3521     0        None      None          None                                 2022-02-21 18:41:29 UTC
Red Hat Product Errata    RHBA-2015:1183  0        normal    SHIPPED_LIVE  Ceph bug fix and enhancement update  2015-06-24 19:49:46 UTC

Description tbrekke 2015-01-30 22:07:56 UTC
Description of problem:

Recreating a bucket via S3 creates a new object in the .rgw pool, while the metadata object for the previously deleted bucket is left behind.

Steps to Reproduce:
1. Create a bucket with an S3 client
2. Delete the bucket, then recreate it
3. `radosgw-admin metadata get bucket:<bucket>` shows the correct bucket_id
4. `rados -p .rgw ls` shows an extra object that does not match the bucket_id (see the reproduction sketch below)
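
A minimal reproduction sketch of the steps above, assuming an RGW endpoint already configured in s3cmd and a hypothetical bucket name testbucket (any S3 client works; s3cmd is just one choice):

    # create, delete, and recreate the bucket with an S3 client
    s3cmd mb s3://testbucket
    s3cmd rb s3://testbucket
    s3cmd mb s3://testbucket

    # the bucket entrypoint metadata now points at the new bucket_id
    radosgw-admin metadata get bucket:testbucket

    # before the fix, the instance metadata object for the old (deleted)
    # bucket is still present in the pool alongside the new one
    rados -p .rgw ls | grep testbucket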


Expected results:
Deleting a bucket should remove the bucket metadata. 

Additional info:
While this is typically not a large issue, it is possible for millions of stale objects to accumulate in the .rgw pool.

Comment 1 Federico Lucifredi 2015-03-25 23:55:31 UTC
Yehuda, what say you?

Comment 2 Federico Lucifredi 2015-03-25 23:57:04 UTC
Yehuda, what say you? 

If you are +1 on this, please set devel flag accordingly, thanks.

Comment 3 Yehuda Sadeh 2015-03-26 00:09:58 UTC
Updated the upstream bug number. Orit is working on this specific bug. The reason we keep the old bucket instance metadata around is that, in a multi-zone configuration, we don't want to lose knowledge of an old bucket instance that may still be referenced by another zone, and we have no way to know whether it is still needed.

Comment 4 Yehuda Sadeh 2015-04-07 17:45:36 UTC
Fix was merged upstream as commit dfdc7afb59cc8e32cf8bff55faa09076c853de06.

Comment 5 Ken Dreyer (Red Hat) 2015-04-16 14:51:53 UTC
I'll clone this bug so we can track the fix separately in RHCS 1.2 (based on Firefly, upstream's v0.80.8) and RHCS 1.3 (based on Hammer, upstream's v0.94.1).

Comment 6 Ken Dreyer (Red Hat) 2015-04-16 15:09:59 UTC
I've verified that dfdc7afb59cc8e32cf8bff55faa09076c853de06 cherry-picks cleanly onto hammer.
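
For reference, a sketch of that check, assuming a clone of the upstream ceph.git repository with a local branch tracking hammer (the -x flag, which records the original commit hash in the message, is a common backport convention rather than a requirement):

    git checkout hammer                                            # local branch tracking upstream hammer
    git cherry-pick -x dfdc7afb59cc8e32cf8bff55faa09076c853de06    # applies without conflicts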

Comment 10 Ken Dreyer (Red Hat) 2015-04-16 23:19:36 UTC
It looks like fe158ecc25feefcea8aea4133118e4a84900a8ec upstream doesn't cherry-pick cleanly to 0.94.1. Yehuda, would you mind providing a cherry-pick for 0.94.1?

Comment 13 shilpa 2015-06-15 08:15:48 UTC
Verified on ceph-0.94.1-11.el7cp.x86_64. Deleting a bucket also removes its metadata.
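
A quick way to spot-check this, reusing the commands from the reproduction steps (bucket name is again the hypothetical testbucket):

    s3cmd rb s3://testbucket            # delete the bucket
    rados -p .rgw ls | grep testbucket  # expect no leftover objects for the deleted bucket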

Comment 15 errata-xmlrpc 2015-06-24 15:50:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2015:1183

