.`snap-schedules` are no longer lost on restarts of Ceph Manager services
Previously, the in-memory databases were not written to persistent storage on every change to the schedule, so `snap-schedules` were lost when Ceph Manager services restarted.
With this fix, the in-memory databases are dumped into persistent storage on every change or addition to the `snap-schedules`. Retention now continues to work across restarts of Ceph Manager services.
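A minimal sketch of the pattern behind the fix, assuming a hypothetical ScheduleStore class with a local file standing in for the persistent store (the real snap_schedule module persists its SQLite database to the metadata pool); this illustrates the approach, not the actual Ceph Manager code:

    import sqlite3

    class ScheduleStore:
        """Hypothetical illustration: in-memory schedule DB persisted on every change."""

        def __init__(self, store_path):
            self.store_path = store_path          # stand-in for the real persistent object
            self.db = sqlite3.connect(":memory:") # schedules live in an in-memory DB
            self.db.execute(
                "CREATE TABLE IF NOT EXISTS schedules"
                " (path TEXT PRIMARY KEY, schedule TEXT, retention TEXT)"
            )

        def dump_db(self):
            # Serialize the whole in-memory DB as SQL text. Before the fix,
            # retention changes skipped this step, so they existed only in
            # memory and vanished when the active mgr restarted.
            with open(self.store_path, "w") as f:
                f.write("\n".join(self.db.iterdump()))

        def add_retention(self, path, retention_json):
            self.db.execute(
                "UPDATE schedules SET retention = ? WHERE path = ?",
                (retention_json, path),
            )
            self.db.commit()
            self.dump_db()  # persist on every change or addition

On restart, the store is rebuilt from the dumped SQL, so any change that reached dump_db() survives the restart.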
Description of problem:
snap_schedule retention is removed after the mgr restarts
Version-Release number of selected component (if applicable):
ceph version 16.2.8-53.el8cp
How reproducible:
Steps to Reproduce:
1. Schedule a snapshot with snap_schedule
2. Add retention to the directory
3. Activate the snap_schedule on the directory
4. Check the status of the schedule
5. Restart the mgrs
6. Check the status of the schedule again (example commands follow this list)
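The steps map to CLI commands roughly like the following (path "/", schedule "1h", and 5-hour retention taken from the status output under Additional info; exact argument syntax may vary by release):

    ceph fs snap-schedule add / 1h
    ceph fs snap-schedule retention add / h 5
    ceph fs snap-schedule activate /
    ceph fs snap-schedule status /
    # restart the active mgr, e.g. by failing it over
    ceph mgr fail
    ceph fs snap-schedule status /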
Actual results:
Retention data is gone after the mgrs restart
Expected results:
Retention should persist even after the mgrs restart
Additional info:
[{"fs": "cephfs", "subvol": null, "path": "/", "rel_path": "/", "schedule": "1h", "retention": {"h": 5}, "start": "2022-06-22T14:59:00", "created": "2022-06-22T18:53:24", "first": null, "last": null, "last_pruned": null, "created_count": 0, "pruned_count": 0, "active": true}]
[cephuser@ceph-julpark-7eopvp-node7 ~]$
[{"fs": "cephfs", "subvol": null, "path": "/", "rel_path": "/", "schedule": "1h", "retention": {}, "start": "2022-06-22T14:59:00", "created": "2022-06-22T18:53:24", "first": "2022-06-22T18:59:00", "last": "2022-06-22T18:59:00", "last_pruned": null, "created_count": 1, "pruned_count": 0, "active": true}]
mgr_logs:
2022-06-22 20:42:27,281 [Dummy-1] [DEBUG] [snap_schedule.fs.schedule] parse_retention(10h)
2022-06-22 20:42:27,282 [Dummy-1] [DEBUG] [snap_schedule.fs.schedule] parse_retention(10h) -> {'h': 10}
2022-06-22 20:42:27,282 [Dummy-1] [DEBUG] [snap_schedule.fs.schedule] db result is ('{}',)

The retention spec parses correctly in memory, but the database query returns an empty retention ('{}'): the retention change was never written to persistent storage, so it did not survive the restart.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (Moderate: Red Hat Ceph Storage 5.3 security update and Bug Fix) and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2023:0076