Bug 2227806 - snap-schedule: allow retention spec to specify max number of snaps to retain
Summary: snap-schedule: allow retention spec to specify max number of snaps to retain
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 5.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.3z6
Assignee: Milind Changire
QA Contact: sumr
Docs Contact: Ranjini M N
URL:
Whiteboard:
Depends On:
Blocks: 2258797
Reported: 2023-07-31 14:45 UTC by Milind Changire
Modified: 2024-02-08 16:50 UTC
CC: 8 users

Fixed In Version: ceph-16.2.10-220.el8cp
Doc Type: Bug Fix
Doc Text:
.The `snap-schedule` module retains a defined number of snapshots

With this release, the `snap-schedule` module supports a new retention specification to retain a user-defined number of snapshots. For example, if you specify 50 snapshots to retain, irrespective of the snapshot creation cadence, the oldest snapshot is pruned after a new snapshot is created. The actual number of snapshots retained is 1 less than the maximum specified; in this example, 49 snapshots are retained, leaving a margin of 1 snapshot that can still be created on the file system on the next iteration. This margin avoids breaching the system-configured limit of `mds_max_snaps_per_dir`.

IMPORTANT: Be careful when configuring `mds_max_snaps_per_dir` and snapshot scheduling limits. If the `mds_max_snaps_per_dir` limit is breached, the file system returns a "Too many links" error, which can unintentionally deactivate snapshot schedules.
Clone Of:
: 2227807 2227809 (view as bug list)
Environment:
Last Closed: 2024-02-08 16:49:59 UTC
Embargoed:




Links
System ID  Last Updated
Red Hat Issue Tracker RHCEPH-7110  2023-07-31 14:45:50 UTC
Red Hat Product Errata RHSA-2024:0745  2024-02-08 16:50:04 UTC

Description Milind Changire 2023-07-31 14:45:14 UTC
Description of problem:
Along with daily, weekly, monthly, and yearly snaps, users also need a way to specify the maximum number of snapshots to retain, in case they find MAX_SNAPS_PER_PATH (50) insufficient for their purpose.

e.g. the new snap-schedule with the retention spec could be: /PATH 1d1m 75n

where a max of 75 snaps from the 1d1m snap-schedule will be retained. If the number of snaps ('n') is not specified in the retention spec, then the default of MAX_SNAPS_PER_PATH (50) should apply.
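The count-based retention described above (with the margin-of-one behavior documented in the fix) can be sketched as follows. This is a minimal illustrative sketch, not the actual snap_schedule module code; the function name `prune_to_max` and the snapshot-name format are assumptions for the example.

```python
# Illustrative sketch of count-based ("n") retention pruning.
# Assumption: snapshot names carry a timestamp prefix so that
# lexicographic order matches chronological order.

MAX_SNAPS_PER_PATH = 50  # default when no 'n' retention spec is given


def prune_to_max(snap_names, max_snaps=MAX_SNAPS_PER_PATH):
    """Return (keep, prune) lists.

    Keeps the newest (max_snaps - 1) snapshots, leaving a margin of one
    so the next scheduled snapshot does not breach the limit.
    """
    ordered = sorted(snap_names, reverse=True)  # newest first
    keep = ordered[: max_snaps - 1]
    prune = ordered[max_snaps - 1:]
    return keep, prune


# Example: 24 hourly snapshots, retention spec of 10n
snaps = [f"scheduled-2024-01-01-{h:02d}_00_00" for h in range(24)]
keep, prune = prune_to_max(snaps, max_snaps=10)
# keep holds the 9 newest snapshots; the remaining 15 are pruned
```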

NOTE: the max number of snaps possible is also a function of the system-wide config named mds_max_snaps_per_dir, which currently defaults to 100.

So, if the number of snaps to be retained for a path/dir exceeds 100, a user would first need to raise the value of mds_max_snaps_per_dir before updating the snap-schedule retention spec beyond 100.
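The workflow above might look like the following. These are illustrative commands with example paths and values; the 'n' retention spec is the new max-count behavior requested by this bug, so verify the exact syntax against your installed ceph version.

```shell
# Raise the system-wide per-directory snapshot cap first (default: 100)
ceph config set mds mds_max_snaps_per_dir 150

# Add a schedule and a count-based retention spec for the path
ceph fs snap-schedule add /PATH 1h
ceph fs snap-schedule retention add /PATH n 75

# Inspect the schedule and retention state
ceph fs snap-schedule status /PATH
```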

NOTE: Since mds_max_snaps_per_dir is a run-time, system-wide config, any change to it immediately affects existing snap-schedule retention specs.



Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 RHEL Program Management 2023-07-31 14:45:26 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 2 Hemanth Kumar 2023-08-01 07:47:25 UTC
5.3z4 is released - Changing the target release to 5.3z5

Comment 3 Venky Shankar 2023-08-09 13:50:28 UTC
Milind, please post an MR.

Comment 4 Scott Ostapovicz 2023-08-17 14:12:32 UTC
Retargeting to 5.3 z6 as it was not complete in the 5.3 z5 window.

Comment 10 sumr 2024-01-02 10:03:28 UTC
Verified the fix as per the QA test plan on ceph build 16.2.10-225.el8cp; no issues seen.

Logs - http://magna002.ceph.redhat.com/ceph-qe-logs/suma/bz_verify/bz_2227806_verification.log

Comment 12 errata-xmlrpc 2024-02-08 16:49:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.3 Security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:0745

