Bug 2102934

Summary: [cephfs][snap_schedule] Adding retention is not working as expected
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: julpark
Component: CephFS
Assignee: Milind Changire <mchangir>
Status: CLOSED ERRATA
QA Contact: julpark
Severity: medium
Docs Contact: Akash Raj <akraj>
Priority: unspecified
Version: 5.2
CC: akraj, ceph-eng-bugs, cephqe-warriors, gfarnum, hyelloji, lithomas, mchangir, mhackett, sostapov, tserlin, vereddy, vshankar, vumrao
Target Milestone: ---
Target Release: 5.3
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-16.2.10-40.el8cp
Doc Type: Bug Fix
Doc Text:
.`snap-schedules` are no longer lost on restarts of Ceph Manager services
Previously, in-memory databases were not written to persistent storage on every change to the schedule. This caused `snap-schedules` to get lost on restart of Ceph Manager services. With this fix, the in-memory databases are dumped into persistent storage on every change or addition to the `snap-schedules`. Retention now continues to work across restarts of Ceph Manager services.
Story Points: ---
Clone Of:
: 2125773 (view as bug list)
Environment:
Last Closed: 2023-01-11 17:39:52 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 2125773, 2126049    

Description julpark 2022-07-01 03:00:09 UTC
Description of problem:

The snap_schedule retention settings are removed after the mgr restarts.

Version-Release number of selected component (if applicable):

ceph version 16.2.8-53.el8cp 
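
For reference, a quick way to confirm the version actually running on the cluster (a sketch; assumes admin access to the ceph CLI):

# version of the local ceph client/package
ceph --version

# versions reported by the running daemons, including the mgr
ceph versions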

How reproducible:


Steps to Reproduce:

1. Schedule a snapshot with snap_schedule (see the command sketch after this list)
2. Add retention to the directory
3. Activate the snap_schedule for the directory
4. Check the status of the schedule
5. Restart the mgr daemons
6. Check the status of the schedule again
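
A minimal command sketch of the above steps (the root path /, the 1h schedule, and the 5h retention match the status output below; `ceph mgr fail` is just one way to force a mgr restart/failover):

ceph fs snap-schedule add / 1h
ceph fs snap-schedule retention add / 5h
ceph fs snap-schedule activate /
ceph fs snap-schedule status / --format=json

# fail over the active mgr (alternatively restart the mgr daemons via
# systemctl or cephadm/ceph orch)
ceph mgr fail

ceph fs snap-schedule status / --format=json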

Actual results:

The retention data is gone after the mgr daemons are restarted.

Expected results:

The retention settings should persist across mgr restarts.

Additional info:

[{"fs": "cephfs", "subvol": null, "path": "/", "rel_path": "/", "schedule": "1h", "retention": {"h": 5}, "start": "2022-06-22T14:59:00", "created": "2022-06-22T18:53:24", "first": null, "last": null, "last_pruned": null, "created_count": 0, "pruned_count": 0, "active": true}]

[cephuser@ceph-julpark-7eopvp-node7 ~]$ 

[{"fs": "cephfs", "subvol": null, "path": "/", "rel_path": "/", "schedule": "1h", "retention": {}, "start": "2022-06-22T14:59:00", "created": "2022-06-22T18:53:24", "first": "2022-06-22T18:59:00", "last": "2022-06-22T18:59:00", "last_pruned": null, "created_count": 1, "pruned_count": 0, "active": true}]

mgr_logs:
2022-06-22 20:42:27,281 [Dummy-1] [DEBUG] [snap_schedule.fs.schedule] parse_retention(10h)
2022-06-22 20:42:27,282 [Dummy-1] [DEBUG] [snap_schedule.fs.schedule] parse_retention(10h) -> {'h': 10}
2022-06-22 20:42:27,282 [Dummy-1] [DEBUG] [snap_schedule.fs.schedule] db result is ('{}',)
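
To capture similar debug output, the module's log verbosity can usually be raised with the standard per-module mgr option (an assumption that this option is exposed in this release; raising debug_mgr is an alternative):

ceph config set mgr mgr/snap_schedule/log_level debug
# alternatively:
ceph config set mgr debug_mgr 10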

Comment 2 Venky Shankar 2022-09-01 05:54:09 UTC
Milind, please post/link the MR.

Comment 31 errata-xmlrpc 2023-01-11 17:39:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.3 security update and Bug Fix), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:0076