+++ This bug was initially created as a clone of Bug #1187821 +++

Description of problem:
Recreating a bucket via s3 created a new object in the .rgw pool.

Steps to Reproduce:
1. Create a bucket with an s3 client
2. Delete the bucket, then recreate the bucket
3. radosgw-admin metadata get bucket:<bucket> shows the correct bucket_id
4. `rados -p .rgw ls` shows extra objects that do not match the bucket_id

Expected results:
Deleting a bucket should remove the bucket metadata.

Additional info:
While this is typically not a large issue, it is possible for millions of objects to end up in the .rgw pool.

--- Additional comment from Federico Lucifredi on 2015-03-25 19:55:31 EDT ---

Yehuda, what say you?

--- Additional comment from Federico Lucifredi on 2015-03-25 19:57:04 EDT ---

Yehuda, what say you? If you are +1 on this, please set the devel flag accordingly, thanks.

--- Additional comment from Yehuda Sadeh on 2015-03-25 20:09:58 EDT ---

Updated the upstream bug number. Orit is working on this specific bug. The reason we keep the old object metadata around is that in a multi-zone configuration we don't want to lose the knowledge about an old bucket instance in case it is still referenced by another zone, and we have no way to know whether it's needed or not.

--- Additional comment from Yehuda Sadeh on 2015-04-07 13:45:36 EDT ---

Fix was merged upstream in commit dfdc7afb59cc8e32cf8bff55faa09076c853de06.

--- Additional comment from Ken Dreyer (Red Hat) on 2015-04-16 10:51:53 EDT ---

I'll clone this bug so we can track the fix separately in RHCS 1.2 (based on Firefly, upstream's v0.80.8) and RHCS 1.3 (based on Hammer, upstream's v0.94.1).
Need to have pm_ack set.
Let's set this back to "ON_QA". The workflow is:

1. ASSIGNED - we know who will do the dev work
2. POST - the work has been done upstream and is in progress downstream
3. MODIFIED - the work has been committed to dist-git, and a Brew build might have been done. At this point you can attach the bug to an errata advisory.
4. ON_QA - a Brew build has been done and packages are ready for QE to test. The Errata Tool will automatically change a bug from "MODIFIED" to "ON_QA" when the bug gets attached to an advisory.
5. VERIFIED - a QE engineer has verified that the bug has been fixed. This change is made by the QE engineer.

The currently assigned QA Contact is Warren, so he can change the bug from ON_QA to VERIFIED.
I tried this on magna022 and I am still running into the same problem after an update.

I used the s3test.py code and modified it. I did a conn.create_bucket('aardvark'), then a conn.delete_bucket('aardvark'), followed by another conn.create_bucket('aardvark'). These were 3 separate python runs.

I then ran 'sudo rados -p .rgw ls' and saw:

.bucket.meta.my-new-bucket:default.4122.2
.bucket.meta.my-new-bucket:default.4122.1
.bucket.meta.aardvark:default.4122.3
aardvark
.bucket.meta.aardvark:default.4122.4

sudo rpm -qv ceph-radosgw
ceph-radosgw-0.80.8-9.el7cp.x86_64
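The three separate runs described above can be collected into one boto 2 sketch. The endpoint and credentials here are placeholders (not from this report), and running it requires a reachable RGW instance:

```python
# Repro sketch for the leftover-metadata issue, modeled on the s3test.py
# calls described in comment 14. Host and keys are placeholders -- substitute
# your own RGW endpoint and credentials before running.
import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',      # placeholder
    aws_secret_access_key='SECRET_KEY',  # placeholder
    host='rgw.example.com',              # hypothetical RGW endpoint
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

conn.create_bucket('aardvark')   # run 1: create
conn.delete_bucket('aardvark')   # run 2: delete
conn.create_bucket('aardvark')   # run 3: recreate

# Afterwards, on a cluster node, inspect the .rgw pool:
#   sudo rados -p .rgw ls
# A stale .bucket.meta.aardvark:<old bucket_id> object left alongside the
# current one indicates the bug.
```

Note that on a multi-region or multi-zone setup the leftover metadata is expected behavior, per the reply below.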
(In reply to Warren from comment #14)
> I tried this on magna022 and I am still running into the same problem after
> an update.
>
> I used the s3test.py code and modified it. I did a
> conn.create_bucket('aardvark')
> Then I did a conn.delete_bucket('aardvark') followed by a
> conn.create_bucket('aardvark')
>
> This was 3 separate python runs.
>
> I then ran 'sudo rados -p .rgw ls' and saw:
> .bucket.meta.my-new-bucket:default.4122.2
> .bucket.meta.my-new-bucket:default.4122.1
> .bucket.meta.aardvark:default.4122.3
> aardvark
> .bucket.meta.aardvark:default.4122.4
>
> sudo rpm -qv ceph-radosgw
> ceph-radosgw-0.80.8-9.el7cp.x86_64

Hi Warren,

What is your regions/zones setup? Because the sync process is external to RGW, we don't delete the metadata in a multi-region or multi-zone configuration. This is because we don't want to remove metadata that may have been synced.
I think that the issue in comment 14 was something peculiar to that setup (magna022). I have a single-site RGW setup where this does not happen.
Works on the 1.2.3.2 ISOs for Trusty and Precise.
Works on the 1.2.3.2 ISOs for CentOS 6.7. There is some metadata left around, but I think that is normal.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1703.html