Description of problem:
Recreating a bucket via S3 creates a new bucket instance object in the .rgw pool, and the object for the previous instance is left behind.

Steps to Reproduce:
1. Create a bucket with an S3 client.
2. Delete the bucket, then recreate it.
3. `radosgw-admin metadata get bucket:<bucket>` shows the correct bucket_id.
4. `rados -p .rgw ls` shows extra objects that do not match that bucket_id.

Expected results:
Deleting a bucket should remove the bucket's metadata.

Additional info:
While this is typically not a large issue, it is possible for millions of stale objects to accumulate in the .rgw pool.
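The stale objects from step 4 can be spotted by comparing object names against the current bucket_id. Below is a minimal Python sketch, assuming the pre-Jewel instance-object naming convention `.bucket.meta.<name>:<bucket_id>`; the bucket ids in the sample listing are hypothetical.

```python
# Sketch: given the current bucket_id reported by
# `radosgw-admin metadata get bucket:<bucket>` and the output of
# `rados -p .rgw ls`, flag bucket-instance objects left behind by
# earlier incarnations of the bucket. Assumes the pre-Jewel naming
# convention ".bucket.meta.<name>:<bucket_id>" for instance objects.

def stale_instance_objects(bucket, current_bucket_id, pool_objects):
    """Return instance objects for `bucket` whose bucket_id is not current."""
    prefix = ".bucket.meta.%s:" % bucket
    stale = []
    for obj in pool_objects:
        if obj.startswith(prefix):
            bucket_id = obj[len(prefix):]
            if bucket_id != current_bucket_id:
                stale.append(obj)
    return stale

if __name__ == "__main__":
    # Sample pool listing after deleting and recreating bucket "foo"
    # (ids are made up for illustration):
    listing = [
        "foo",
        ".bucket.meta.foo:default.4331.1",   # old, stale instance
        ".bucket.meta.foo:default.4331.2",   # current instance
    ]
    print(stale_instance_objects("foo", "default.4331.2", listing))
    # -> ['.bucket.meta.foo:default.4331.1']
```

This is only a diagnostic illustration of the symptom, not a cleanup tool; removing metadata objects by hand is risky, especially in multi-zone configurations (see the comment below about why old instances are kept).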
Yehuda, what say you? If you are +1 on this, please set the devel flag accordingly, thanks.
Updated the upstream bug number. Orit is working on this specific bug. The reason we keep the old object metadata around is that, in a multi-zone configuration, we don't want to lose knowledge of an old bucket instance that may still be referenced by another zone; we have no way to know whether it's still needed.
The fix was merged upstream as commit dfdc7afb59cc8e32cf8bff55faa09076c853de06.
I'll clone this bug so we can track the fix separately in RHCS 1.2 (based on Firefly, upstream's v0.80.8) and RHCS 1.3 (based on Hammer, upstream's v0.94.1).
I've verified that dfdc7afb59cc8e32cf8bff55faa09076c853de06 cherry-picks cleanly onto hammer.
It looks like fe158ecc25feefcea8aea4133118e4a84900a8ec upstream doesn't cherry-pick cleanly to 0.94.1. Yehuda, would you mind providing a cherry-pick for 0.94.1?
Verified on ceph-0.94.1-11.el7cp.x86_64. Deleting a bucket now also removes its metadata.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2015:1183