Description of problem:
snap_schedule retention is removed after the mgr restarts.

Version-Release number of selected component (if applicable):
ceph version 16.2.8-53.el8cp

How reproducible:

Steps to Reproduce:
1. Schedule a snapshot with snap_schedule
2. Add retention to the dir
3. Activate the snap_schedule on the dir
4. Check the status of the schedule
5. Restart the mgrs
6. Check the status of the schedule again
(A command sketch of these steps follows this report.)

Actual results:
The retention data is gone after the mgrs restart.

Expected results:
The retention should persist even though the mgr restarts.

Additional info:

Schedule status before the mgr restart:
[{"fs": "cephfs", "subvol": null, "path": "/", "rel_path": "/", "schedule": "1h", "retention": {"h": 5}, "start": "2022-06-22T14:59:00", "created": "2022-06-22T18:53:24", "first": null, "last": null, "last_pruned": null, "created_count": 0, "pruned_count": 0, "active": true}]

Schedule status after the mgr restart (note the empty "retention"):
[{"fs": "cephfs", "subvol": null, "path": "/", "rel_path": "/", "schedule": "1h", "retention": {}, "start": "2022-06-22T14:59:00", "created": "2022-06-22T18:53:24", "first": "2022-06-22T18:59:00", "last": "2022-06-22T18:59:00", "last_pruned": null, "created_count": 1, "pruned_count": 0, "active": true}]

mgr logs:
2022-06-22 20:42:27,281 [Dummy-1] [DEBUG] [snap_schedule.fs.schedule] parse_retention(10h)
2022-06-22 20:42:27,282 [Dummy-1] [DEBUG] [snap_schedule.fs.schedule] parse_retention(10h) -> {'h': 10}
2022-06-22 20:42:27,282 [Dummy-1] [DEBUG] [snap_schedule.fs.schedule] db result is ('{}',)
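For reference, a minimal command sketch of the reproduction steps above (a sketch, not verified against this exact build; it assumes the root path "/", the "1h" schedule and 5-hourly retention shown in the status output, and a standby mgr to fail over to):

# 1-2. Create an hourly schedule on / and add a retention policy of 5 hourly snapshots
ceph fs snap-schedule add / 1h
ceph fs snap-schedule retention add / h 5

# 3-4. Activate the schedule and confirm "retention": {"h": 5} in the status
ceph fs snap-schedule activate /
ceph fs snap-schedule status / -f json

# 5. Fail over the active mgr (or restart the mgr daemons via systemctl/cephadm;
#    pass the active mgr name if your Ceph version requires it)
ceph mgr fail

# 6. Re-check the status; in the failing case "retention" comes back empty ({})
ceph fs snap-schedule status / -f json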
Milind, please post/link the MR.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage 5.3 security update and Bug Fix), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2023:0076