Bug 1212524 - .rgw pool contains extra objects
Summary: .rgw pool contains extra objects
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW
Version: 1.2.3
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: rc
Target Release: 1.2.4
Assignee: Yehuda Sadeh
QA Contact: Warren
URL:
Whiteboard:
Depends On: 1187821
Blocks:
 
Reported: 2015-04-16 14:52 UTC by Ken Dreyer (Red Hat)
Modified: 2017-07-30 15:40 UTC
CC List: 17 users

Fixed In Version: ceph-0.80.8-8.el6cp ceph-0.80.8-8.el7cp
Doc Type: Bug Fix
Doc Text:
Previously, deleting a bucket in RGW did not remove its bucket instance metadata object, so recreating a bucket with the same name left a redundant object in the .rgw pool. This update addresses the problem, and redundant objects are no longer generated in the described scenario.
Clone Of: 1187821
Environment:
Last Closed: 2015-09-02 14:07:29 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 11149 0 None None None Never
Ceph Project Bug Tracker 11416 0 None None None Never
Red Hat Product Errata RHBA-2015:1703 0 normal SHIPPED_LIVE ceph-radosgw and librbd package bug-fix update 2015-09-02 18:07:13 UTC

Description Ken Dreyer (Red Hat) 2015-04-16 14:52:45 UTC
+++ This bug was initially created as a clone of Bug #1187821 +++

Description of problem:

Recreating a bucket via S3 leaves an extra object in the .rgw pool.

Steps to Reproduce:
1. Create a bucket with an S3 client
2. Delete bucket, then recreate the bucket
3. radosgw-admin metadata get bucket:<bucket> shows the correct bucket_id
4. `rados -p .rgw ls` shows extra objects that do not match the bucket_id (a minimal reproduction sketch in Python follows below)
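
For reference, a minimal sketch of steps 1-2 using the boto S3 API (Python 2), with a placeholder RGW endpoint and credentials; steps 3-4 are noted in the comments:

import boto
import boto.s3.connection

# Placeholder RGW endpoint and keys -- substitute real values.
conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='rgw.example.com',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

# Step 1: create the bucket.
conn.create_bucket('my-new-bucket')

# Step 2: delete it, then recreate it under the same name.
conn.delete_bucket('my-new-bucket')
conn.create_bucket('my-new-bucket')

# Steps 3-4 are run on the RGW node:
#   radosgw-admin metadata get bucket:my-new-bucket
#   rados -p .rgw ls
# On affected builds, the old .bucket.meta.<bucket>:<old_bucket_id> object
# remains alongside the one matching the current bucket_id.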


Expected results:
Deleting a bucket should remove the bucket metadata. 

Additional info:
While this is typically not a large issue, it is possible for millions of objects to end up in the .rgw pool.

--- Additional comment from Federico Lucifredi on 2015-03-25 19:55:31 EDT ---

Yehuda, what say you?

--- Additional comment from Federico Lucifredi on 2015-03-25 19:57:04 EDT ---

Yehuda, what say you? 

If you are +1 on this, please set devel flag accordingly, thanks.

--- Additional comment from Yehuda Sadeh on 2015-03-25 20:09:58 EDT ---

Updated the upstream bug number. Orit is working on this specific bug. The reason we keep the old bucket instance metadata around is that, in a multi-zone configuration, it may still be referenced by another zone, and we have no way to know whether it is still needed.

--- Additional comment from Yehuda Sadeh on 2015-04-07 13:45:36 EDT ---

Fix was merged upstream as commit dfdc7afb59cc8e32cf8bff55faa09076c853de06.

--- Additional comment from Ken Dreyer (Red Hat) on 2015-04-16 10:51:53 EDT ---

I'll clone this bug so we can track the fix separately in RHCS 1.2 (based on Firefly, upstream's v0.80.8) and RHCS 1.3 (based on Hammer, upstream's v0.94.1).

Comment 5 Yehuda Sadeh 2015-04-17 00:07:05 UTC
Need to have pm_ack set.

Comment 7 Ken Dreyer (Red Hat) 2015-04-17 15:50:25 UTC
Let's set this back to "ON_QA". The workflow is:

1. ASSIGNED - we know who will do the dev work

2. POST - the work has been done upstream, and is in progress downstream

3. MODIFIED - the work has been committed to dist-git, and a Brew build might have been done. At this point you can attach the bug to an errata advisory.

4. ON_QA - a Brew build has been done and packages are ready for QE to test. The Errata Tool will automatically change a bug from "MODIFIED" to "ON_QA" when the bug gets attached to an advisory.

5. VERIFIED - a QE engineer has verified that the bug has been fixed. This change is made by the QE engineer.

The currently-assigned QA Contact is Warren, so he can change the bug from ON_QA to VERIFIED.

Comment 14 Warren 2015-04-28 00:37:05 UTC
I tried this on magna022 and I am still running into the same problem after an update.

I used the s3test.py code and modified it. I did a conn.create_bucket('aardvark').
Then I did a conn.delete_bucket('aardvark'), followed by a conn.create_bucket('aardvark').

These were 3 separate Python runs.

I then ran 'sudo rados -p .rgw ls' and saw:
.bucket.meta.my-new-bucket:default.4122.2
.bucket.meta.my-new-bucket:default.4122.1
.bucket.meta.aardvark:default.4122.3
aardvark
.bucket.meta.aardvark:default.4122.4


sudo rpm -qv ceph-radosgw 
ceph-radosgw-0.80.8-9.el7cp.x86_64
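
A minimal sketch of the same check, assuming the rados CLI and an admin keyring are available on the host (run with sudo as above), using the bucket name from this test:

import subprocess

# List the .rgw pool and pick out the bucket instance metadata objects
# for the test bucket.
out = subprocess.check_output(['rados', '-p', '.rgw', 'ls'])
meta = [line for line in out.splitlines()
        if line.startswith('.bucket.meta.aardvark:')]
print(meta)
# On a fixed single-site setup, only the object matching the current
# bucket_id reported by `radosgw-admin metadata get bucket:aardvark`
# is expected to remain.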

Comment 15 Orit Wasserman 2015-04-28 10:47:33 UTC
(In reply to Warren from comment #14)
> I tried this on magna022 and I am still running into the same problem after
> an update.
> 
> I used the s3test.py code and modified it.  I did a
> conn.create_bucket('aardvark')
> Then I did a conn.delete_bucket('aardvark') followed by a
> conn.create_bucket('aardvark')
> 
> This was 3 separate python runs.
> 
> I then ran 'sudo rados -p .rgw ls' and saw:
> .bucket.meta.my-new-bucket:default.4122.2
> .bucket.meta.my-new-bucket:default.4122.1
> .bucket.meta.aardvark:default.4122.3
> aardvark
> .bucket.meta.aardvark:default.4122.4
> 
> 
> sudo rpm -qv ceph-radosgw 
> ceph-radosgw-0.80.8-9.el7cp.x86_64

Hi Warren,
What is your regions/zones setup?

Because the sync process is external to RGW, we don't delete the metadata in a multi-region or multi-zone configuration. This is because we don't want to remove metadata that may have already been synced.

Comment 18 Warren 2015-08-14 22:48:22 UTC
I think that the issue in Comment 14 was something peculiar to that setup (magna022). I have a single-site RGW where this does not happen.

Comment 19 Warren 2015-08-26 05:09:33 UTC
Works on the 1.2.3.2 ISOs for Trusty and Precise.

Comment 20 Warren 2015-09-01 02:15:54 UTC
Works on the 1.2.3.2 ISOs for CentOS 6.7. There is some metadata left around, but I think that is normal.

Comment 22 errata-xmlrpc 2015-09-02 14:07:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1703.html

