Bug 2102934 - [cephfs][snap_schedule] Adding retention is not working as expected
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 5.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 5.3
Assignee: Milind Changire
QA Contact: julpark
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks: 2125773 2126049
 
Reported: 2022-07-01 03:00 UTC by julpark
Modified: 2023-07-04 14:41 UTC
CC: 13 users

Fixed In Version: ceph-16.2.10-40.el8cp
Doc Type: Bug Fix
Doc Text:
.`snap-schedules` are no longer lost on restarts of Ceph Manager services

Previously, in-memory databases were not written to persistent storage on every change to the schedule. This caused `snap-schedules` to get lost on restart of Ceph Manager services. With this fix, the in-memory databases are dumped into persistent storage on every change or addition to the `snap-schedules`. Retention now continues to work across restarts of Ceph Manager services.
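
A minimal sketch of the write-through pattern this describes, assuming an in-memory SQLite database. The class name, schema, and local dump file below are illustrative stand-ins, not the actual snap_schedule module code (which persists its dump inside Ceph rather than to a local file):

import sqlite3

DB_DUMP = "/var/tmp/snap_schedule_db.sql"  # hypothetical local stand-in

class ScheduleStore:
    # Schedules live in an in-memory SQLite DB; every change is written
    # through to persistent storage so a Manager restart cannot lose it.
    def __init__(self, dump_sql=None):
        self.conn = sqlite3.connect(":memory:")
        if dump_sql:
            # Rebuild the in-memory DB from a previously persisted dump.
            self.conn.executescript(dump_sql)
        else:
            self.conn.execute(
                "CREATE TABLE schedules (path TEXT PRIMARY KEY, retention TEXT)")

    def _persist(self):
        # Serialize the whole in-memory DB to SQL text and store it.
        with open(DB_DUMP, "w") as f:
            f.write("\n".join(self.conn.iterdump()))

    def set_retention(self, path, retention):
        self.conn.execute(
            "INSERT OR REPLACE INTO schedules (path, retention) VALUES (?, ?)",
            (path, retention))
        self.conn.commit()
        self._persist()  # the fix: dump on every change, not only on shutdown

After a restart, the schedules (retention included) can then be rebuilt with
ScheduleStore(dump_sql=open(DB_DUMP).read()).
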
Clone Of:
Cloned As: 2125773
Environment:
Last Closed: 2023-01-11 17:39:52 UTC
Embargoed:




Links
Red Hat Issue Tracker RHCEPH-4653 (last updated 2022-07-01 03:27:12 UTC)
Red Hat Product Errata RHSA-2023:0076 (last updated 2023-01-11 17:40:47 UTC)

Description julpark 2022-07-01 03:00:09 UTC
Description of problem:

Snap_schedule retention is removed after the MGRs are restarted.

Version-Release number of selected component (if applicable):

ceph version 16.2.8-53.el8cp 

How reproducible:


Steps to Reproduce:

1. Schedule a snapshot with snap_schedule
2. Add retention to the directory
3. Activate the snap_schedule for the directory
4. Check the status of the schedule
5. Restart the MGRs
6. Check the status of the schedule again (see the example commands below)
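
For reference, a minimal command transcript for the steps above (the path /, schedule 1h, and retention of 5 hourly snapshots match the status output in "Additional info" below; the exact values are otherwise arbitrary):

# 1-3: create the schedule, add retention, activate it
ceph fs snap-schedule add / 1h
ceph fs snap-schedule retention add / h 5
ceph fs snap-schedule activate /
# 4: check the status
ceph fs snap-schedule status / -f json
# 5: restart the active MGR (fails over to a standby)
ceph mgr fail
# 6: check the status again
ceph fs snap-schedule status / -f json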

Actual results:

Retention data is gone after the MGRs are restarted.

Expected results:

Retention should persist even after the MGRs restart.

Additional info:

[{"fs": "cephfs", "subvol": null, "path": "/", "rel_path": "/", "schedule": "1h", "retention": {"h": 5}, "start": "2022-06-22T14:59:00", "created": "2022-06-22T18:53:24", "first": null, "last": null, "last_pruned": null, "created_count": 0, "pruned_count": 0, "active": true}]

Schedule status after the MGR restart (retention is now empty):

[{"fs": "cephfs", "subvol": null, "path": "/", "rel_path": "/", "schedule": "1h", "retention": {}, "start": "2022-06-22T14:59:00", "created": "2022-06-22T18:53:24", "first": "2022-06-22T18:59:00", "last": "2022-06-22T18:59:00", "last_pruned": null, "created_count": 1, "pruned_count": 0, "active": true}]

mgr_logs:
2022-06-22 20:42:27,281 [Dummy-1] [DEBUG] [snap_schedule.fs.schedule] parse_retention(10h)
2022-06-22 20:42:27,282 [Dummy-1] [DEBUG] [snap_schedule.fs.schedule] parse_retention(10h) -> {'h': 10}
2022-06-22 20:42:27,282 [Dummy-1] [DEBUG] [snap_schedule.fs.schedule] db result is ('{}',)
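
The log shows the symptom precisely: parse_retention correctly turns "10h" into {'h': 10}, yet the database query still returns an empty retention ('{}'), i.e. the parsed value never reached persistent storage. For illustration, a hypothetical reimplementation of what a parse_retention-style helper does (not the module's actual code):

import re

def parse_retention(spec):
    # Turn a spec such as "10h" or "5h3d" into {"h": 10} / {"h": 5, "d": 3}.
    return {period: int(count)
            for count, period in re.findall(r"(\d+)([a-zA-Z])", spec)}

assert parse_retention("10h") == {"h": 10}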

Comment 2 Venky Shankar 2022-09-01 05:54:09 UTC
Milind, please post/link the MR.

Comment 31 errata-xmlrpc 2023-01-11 17:39:52 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.3 security update and Bug Fix), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:0076

