Bug 1330732 - [RFE] Adding an optional limit to RBD snapshots
Summary: [RFE] Adding an optional limit to RBD snapshots
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: RBD
Version: 1.3.2
Hardware: All
OS: Linux
Target Milestone: rc
Target Release: 3.0
Assignee: Jason Dillaman
QA Contact: Shreekar
Docs Contact: Erin Donnelly
Depends On:
Blocks: 1494421
Reported: 2016-04-26 20:16 UTC by Mike Hackett
Modified: 2019-11-14 07:52 UTC (History)
6 users

Fixed In Version: RHEL: ceph-12.1.2-1.el7cp Ubuntu: ceph_12.1.2-2redhat1xenial
Doc Type: Enhancement
Doc Text:
.Option to add a limit on RBD snapshots
A new option to set a limit on the number of snapshots on a RADOS Block Device (RBD) image is now supported. Use the `rbd snap limit` command with the `--limit` option to set the limit.
Clone Of:
Last Closed: 2017-12-05 23:29:38 UTC
Target Upstream Version:

Attachments (Terms of Use)

System ID Priority Status Summary Last Updated
Ceph Project Bug Tracker 15706 None None None 2016-05-03 12:29:33 UTC
Red Hat Product Errata RHBA-2017:3387 normal SHIPPED_LIVE Red Hat Ceph Storage 3.0 bug fix and enhancement update 2017-12-06 03:03:45 UTC

Description Mike Hackett 2016-04-26 20:16:58 UTC
1. Proposed title of this feature request

 Adding an optional limit to RBD snapshots

2. Who is the customer behind the request?

Account name: 315059 / Commerzbank AG

SRM customer: yes/no

TAM customer: yes/no

Strategic Customer: yes/no


Commerzbank AG is a well known Ceph account


3. What is the nature and description of the request?

The aim is to prevent the accidental creation of an unbounded number of RBD snapshots in a given pool by assigning an optional limit to snapshots. The limit should be disabled by default so that no backward-compatibility issues arise.

4. Why does the customer need this? (List the business requirements here)

To prevent the accidental creation of an unbounded number of snapshots.
Purge jobs on snapshots are very cost-intensive operations; in the worst case the whole cluster becomes unusable.
This feature would minimize the risk of such incidents.

5. How would the customer like to achieve this? (List the functional requirements here)

Assigning a snapshot limit to a given pool via an `rbd` command

6. For each functional requirement listed in question 5, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.

 - assign a snapshot limit
 - create snapshots in a loop
 - `rbd snap create` should fail once the snapshot limit is reached
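The test steps above can be sketched with the `rbd` CLI. This is a sketch assuming the `rbd snap limit set` / `rbd snap limit clear` subcommands from the upstream pull request; note that per the shipped enhancement the limit applies per image rather than per pool, and the pool/image/snapshot names below are placeholders:

```shell
# Set a limit of 3 snapshots on an image (pool/image names are examples)
rbd snap limit set rbd/test-image --limit 3

# Create snapshots in a loop; the fourth create should be rejected
for i in 1 2 3 4; do
    rbd snap create rbd/test-image --snap "snap-$i" || echo "snapshot $i rejected"
done

# Remove the limit again
rbd snap limit clear rbd/test-image
```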


7. Is there already an existing RFE upstream or in Red Hat bugzilla?

No, but there was some discussion on sme-storage: http://post-office.corp.redhat.com/archives/sme-storage/2015-December/msg00001.html

8. Does the customer have any specific timeline dependencies?

As soon as possible. This feature would be a game changer in production-readiness evaluations.

9. Is the sales team involved in this request and do they have any additional input?

Our sales ambassador in this case is Herr Herschaft.

10. List any affected packages or components.


11. Would the customer be able to assist in testing this functionality if implemented?

The customer would be happy to test the feature once it is available.

Comment 6 Jason Dillaman 2016-08-10 19:22:36 UTC
Upstream pull request (Kraken+ release, not RHCS 2.0 series): https://github.com/ceph/ceph/pull/9151

Comment 7 Federico Lucifredi 2016-09-15 08:08:05 UTC
Jason, is this a backport candidate? If not, please adjust target to RHCS 3.

Comment 8 Jason Dillaman 2016-09-15 09:02:02 UTC
No -- it requires both librbd + OSD cls library changes.

Comment 14 errata-xmlrpc 2017-12-05 23:29:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

