Bug 2107404 - [RHCS 6] removing snapshots created in nautilus after upgrading to pacific leaves clones around
Summary: [RHCS 6] removing snapshots created in nautilus after upgrading to pacific leaves clones around
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 5.2
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 6.0
Assignee: Matan Breizman
QA Contact: skanta
Docs Contact: Masauso Lungu
URL:
Whiteboard:
Depends On:
Blocks: 2107405 2126050
 
Reported: 2022-07-14 22:38 UTC by Vikhyat Umrao
Modified: 2023-03-20 18:57 UTC
CC: 19 users

Fixed In Version: ceph-17.2.3-12.el9cp
Doc Type: Bug Fix
Doc Text:
.Users can remove cloned objects after upgrading a cluster
Previously, after upgrading a cluster from {storage-product} 4 to {storage-product} 5, removing snapshots of objects created in earlier versions would leave clones behind, which could not be removed. This was because the SnapMapper keys were wrongly converted. With this fix, SnapMapper's legacy conversion was updated to match the new key format, and cloned objects created in earlier versions of Ceph can now be removed after an upgrade. (A simplified illustration of the key-format mismatch follows the Links section below.)
Clone Of:
Clones: 2107405
Environment:
Last Closed: 2023-03-20 18:57:08 UTC
Embargoed:
mbreizma: needinfo-




Links
Ceph Project Bug Tracker 56147 (last updated 2022-07-14 22:38:56 UTC)
Github ceph/ceph pull 47133, Merged: quincy: osd/SnapMapper: fix legacy key conversion in snapmapper class (last updated 2022-08-19 03:57:00 UTC)
Red Hat Issue Tracker RHCEPH-4767 (last updated 2022-07-14 22:42:02 UTC)
Red Hat Product Errata RHBA-2023:1360 (last updated 2023-03-20 18:57:48 UTC)
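
What follows is a minimal, self-contained C++ sketch of the failure mode described in the Doc Text above. The key formats ("SNA_" prefix, zero-padded hex snapid) and all names are hypothetical stand-ins for illustration, not Ceph's actual SnapMapper on-disk encoding; the point is only that a conversion which emits keys in a format the snap trimmer never computes leaves the clone's mapping unreachable, so the clone object is never removed.

// Minimal sketch (not Ceph's real code) of the failure mode: the key formats
// below ("SNA_" prefix, zero-padded hex snapid) are hypothetical stand-ins.
#include <cstdio>
#include <map>
#include <string>

using OmapStore = std::map<std::string, std::string>; // stand-in for OSD OMAP

// New-format key as the snap trimmer computes it in this sketch: the snapid
// is rendered as zero-padded hex, one of the format details a conversion
// has to reproduce exactly.
std::string new_key(unsigned snapid, const std::string& obj) {
  char buf[32];
  std::snprintf(buf, sizeof(buf), "SNA_%08X_", snapid);
  return std::string(buf) + obj;
}

// Buggy legacy conversion: renders the snapid in decimal, so the converted
// key never matches what new_key() produces for the same mapping.
std::string convert_legacy_buggy(unsigned snapid, const std::string& obj) {
  return "SNA_" + std::to_string(snapid) + "_" + obj;
}

int main() {
  OmapStore omap;
  // Mapping written before the upgrade, converted with the buggy routine.
  omap[convert_legacy_buggy(16, "rbd_data.1234.0000")] = "mapping";

  // Snap trimming later looks the mapping up by the correct new-format key,
  // misses it, and the clone is never queued for removal.
  bool found = omap.count(new_key(16, "rbd_data.1234.0000")) > 0;
  std::printf("trimmer finds mapping: %s\n", found ? "yes" : "no"); // "no"
  return 0;
}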

Description Vikhyat Umrao 2022-07-14 22:38:57 UTC
Description of problem:
[RHCS 6] removing snapshots created in nautilus after upgrading to pacific leaves clones around

Version-Release number of selected component (if applicable):

Upstream octopus
Upstream pacific / Downstream RHCS 5.0 and above
Upstream quincy / Downstream RHCS 6.0

How reproducible:

Reported by upstream users.
Reproduced in unit test cases: https://gist.github.com/Matan-B/9913f9bf413f7805d7a52dac22a589c1 (a simplified sketch of the invariant these tests check follows the upstream trackers below)

Upstream trackers:

https://tracker.ceph.com/issues/56147
main branch PR - https://github.com/ceph/ceph/pull/46908
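
The gist linked above holds the actual unit tests. As a rough, hypothetical illustration of the invariant such a test asserts (again with simplified stand-in key formats and names such as convert_legacy_fixed, not Ceph's real SnapMapper code), a corrected conversion can be checked by verifying that a converted legacy entry is found with the same key computation the trimmer uses:

// Hypothetical regression check in the spirit of the unit tests linked above
// (simplified stand-in key format, not Ceph's real SnapMapper encoding).
#include <cassert>
#include <cstdio>
#include <map>
#include <string>

std::string new_key(unsigned snapid, const std::string& obj) {
  char buf[32];
  std::snprintf(buf, sizeof(buf), "SNA_%08X_", snapid);
  return std::string(buf) + obj;
}

// Fixed conversion: derives the key with the same helper the trimmer uses,
// so converted legacy entries stay reachable after the upgrade.
std::string convert_legacy_fixed(unsigned snapid, const std::string& obj) {
  return new_key(snapid, obj);
}

int main() {
  std::map<std::string, std::string> omap;
  omap[convert_legacy_fixed(16, "rbd_data.1234.0000")] = "mapping";
  // The invariant the fix restores: a post-conversion key must be found by
  // the same key computation snap trimming performs.
  assert(omap.count(new_key(16, "rbd_data.1234.0000")) == 1);
  std::printf("legacy mapping reachable after conversion\n");
  return 0;
}

The design point in this sketch is that conversion and lookup share a single key-derivation helper, so the two formats cannot drift apart.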

Comment 32 errata-xmlrpc 2023-03-20 18:57:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360

