Bug 1212524
Summary: | .rgw pool contains extra objects | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Ken Dreyer (Red Hat) <kdreyer> |
Component: | RGW | Assignee: | Yehuda Sadeh <yehuda> |
Status: | CLOSED ERRATA | QA Contact: | Warren <wusui> |
Severity: | high | Docs Contact: | |
Priority: | urgent | ||
Version: | 1.2.3 | CC: | cbodley, ceph-eng-bugs, ceph-qe-bugs, flucifre, hnallurv, icolle, jdillama, jherrman, kbader, kdreyer, mbenjamin, nlevine, owasserm, sweil, tbrekke, tmuthami, yehuda |
Target Milestone: | rc | ||
Target Release: | 1.2.4 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | ceph-0.80.8-8.el6cp ceph-0.80.8-8.el7cp | Doc Type: | Bug Fix |
Doc Text: |
Previously, recreating a deleted bucket in RGW did not remove the old bucket instance metadata object, which left a redundant object in the RGW pool. This update addresses the problem, and redundant objects are no longer generated in the described scenario.
|
Story Points: | --- |
Clone Of: | 1187821 | Environment: | |
Last Closed: | 2015-09-02 14:07:29 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1187821 | ||
Bug Blocks: |
Description
Ken Dreyer (Red Hat)
2015-04-16 14:52:45 UTC
Need to have pm_ack set. Let's set this back to "ON_QA". The workflow is:

1. ASSIGNED - we know who will do the dev work.
2. POST - the work has been done upstream, and is in progress downstream.
3. MODIFIED - the work has been committed to dist-git, and a Brew build might have been done. At this point you can attach the bug to an errata advisory.
4. ON_QA - a Brew build has been done and packages are ready for QE to test. The Errata Tool will automatically change a bug from "MODIFIED" to "ON_QA" when the bug gets attached to an advisory.
5. VERIFIED - a QE engineer has verified that the bug has been fixed. This change is made by the QE engineer. The currently assigned QA Contact is Warren, so he can change the bug from ON_QA to VERIFIED.

I tried this on magna022 and I am still running into the same problem after an update.

I used the s3test.py code and modified it. I did a conn.create_bucket('aardvark'), then a conn.delete_bucket('aardvark'), followed by a conn.create_bucket('aardvark'). These were 3 separate Python runs.

I then ran 'sudo rados -p .rgw ls' and saw:

    .bucket.meta.my-new-bucket:default.4122.2
    .bucket.meta.my-new-bucket:default.4122.1
    .bucket.meta.aardvark:default.4122.3
    aardvark
    .bucket.meta.aardvark:default.4122.4

    sudo rpm -qv ceph-radosgw
    ceph-radosgw-0.80.8-9.el7cp.x86_64

(In reply to Warren from comment #14)

Hi Warren, what is your regions/zones setup?
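The leak in the listing above can be illustrated without a cluster: each create writes a .bucket.meta.&lt;name&gt;:&lt;instance&gt; object alongside the bucket entrypoint object, and before the fix the delete path removed only the entrypoint. The following is a minimal in-memory model of that behavior; the class and method names are hypothetical and are not real RGW or boto APIs.

```python
# Hypothetical in-memory model of the metadata leak; not actual RGW code.
class RgwPoolModel:
    def __init__(self):
        self.objects = set()   # models the output of 'rados -p .rgw ls'
        self.instance = 0      # models the default.4122.<n> instance suffix

    def create_bucket(self, name):
        # Every create allocates a new bucket instance metadata object.
        self.instance += 1
        self.objects.add(name)  # bucket entrypoint object
        self.objects.add(f".bucket.meta.{name}:default.4122.{self.instance}")

    def delete_bucket(self, name, remove_instance_meta=False):
        # Buggy behavior (the default here): only the entrypoint object
        # is removed, so the instance metadata object survives the delete.
        self.objects.discard(name)
        if remove_instance_meta:  # behavior after the fix
            self.objects = {o for o in self.objects
                            if not o.startswith(f".bucket.meta.{name}:")}

pool = RgwPoolModel()
pool.create_bucket("aardvark")
pool.delete_bucket("aardvark")   # buggy delete leaves instance 1 behind
pool.create_bucket("aardvark")   # instance 2 is created
stale = [o for o in pool.objects if o.startswith(".bucket.meta.aardvark:")]
print(sorted(stale))  # two .bucket.meta.aardvark:* objects, as in comment #14
```

Passing remove_instance_meta=True on the delete models the fixed single-site behavior, where recreating the bucket leaves only one instance metadata object.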
Because the sync process is external to RGW, we don't delete the metadata in a multiregion or multizone configuration. This is because we don't want to remove metadata that may have been synced.

I think that the issue in comment 14 was something peculiar to that setup (magna022). I have a single-site RGW where this does not happen.

Works on 1.2.3.2 ISOs for trusty and precise.

Works on 1.2.3.2 ISOs for CentOS 6.7. There is some metadata left around, but I think that is normal.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1703.html
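The deletion policy described here (keep instance metadata whenever an external sync agent might still need it) amounts to a simple guard on the delete path. A sketch with hypothetical names; this is not the actual RGW source:

```python
# Hypothetical guard modeling the policy described in the comment above:
# in a multi-region or multi-zone deployment the sync agent is external
# to RGW, so bucket instance metadata must be kept on delete because a
# peer zone may not have synced it yet.
def should_remove_instance_meta(region_count: int, zone_count: int) -> bool:
    return region_count <= 1 and zone_count <= 1

# Single site: safe to remove the instance metadata object on delete.
print(should_remove_instance_meta(1, 1))  # True
# Federated setup: metadata is retained, matching the behavior above.
print(should_remove_instance_meta(2, 1))  # False
```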