The issue was originally discussed here: https://github.com/ceph/ceph-csi/issues/1133
The Ceph upstream tracker for the same is here: https://tracker.ceph.com/issues/46074

Snip from the github discussion: "For cephfs volumes, we only create snapshots at the volume root. We can disable the special handling for inodes with multiple links. If the special handling is disabled, that can help avoid the 400 snapshots-per-file-system limit."

Subvolumes are intended to be used in isolation from each other, so hard links across subvolumes should not be a concern. Given this, the handling discussed above can be relaxed for subvolumes.

This bug is filed to track the backport of this fix to the Ceph versions that will ship with OCS 4.6. OCS 4.6 has a snapshot limit requirement of 512, as per: https://issues.redhat.com/browse/KNIP-661
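For context, a minimal sketch (Python, using the standard subprocess module) of how a snapshot is taken at the subvolume root through the "ceph fs subvolume snapshot create" mgr command, which is the level ceph-csi snapshots at. The volume, subvolume and snapshot names below are illustrative only, not taken from this report:

# Sketch: create a snapshot at the root of a CephFS subvolume via the mgr
# "fs subvolume snapshot" interface (the level ceph-csi snapshots at),
# rather than at an arbitrary directory inside the tree.
# The volume/subvolume/snapshot names are illustrative only.
import subprocess

def create_subvolume_snapshot(volume, subvolume, snapshot, group=None):
    cmd = ["ceph", "fs", "subvolume", "snapshot", "create",
           volume, subvolume, snapshot]
    if group:
        cmd += ["--group_name", group]
    subprocess.run(cmd, check=True)

create_subvolume_snapshot("cephfs", "subvol1", "snap-0001")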
Can we get QA_ACK?
Hi Shyam,

As part of verifying the fix, I was able to reach 1k snapshots of a subvolume, created at regular intervals with I/O running. I don't see any issues on the filesystem so far, and the I/O has been running for at least 2 days now.

Do we still have any limit on the number of snapshots? If so, I can push to the maximum and see whether we hit any issues.
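For reference, a rough sketch of that kind of scale test: creating snapshots of one subvolume at regular intervals while client I/O runs separately. The 1000 count, the 60-second interval and the names here are assumptions, not the exact values used in the test:

# Rough sketch of the scale test described above. Creates snapshots of one
# subvolume at regular intervals; client I/O is assumed to run separately.
# The count, interval, and names are assumptions, not the exact test values.
import subprocess
import time

VOLUME = "cephfs"      # illustrative volume name
SUBVOLUME = "subvol1"  # illustrative subvolume name

for i in range(1000):
    subprocess.run(["ceph", "fs", "subvolume", "snapshot", "create",
                    VOLUME, SUBVOLUME, "snap-{:04d}".format(i)],
                   check=True)
    time.sleep(60)     # regular interval between snapshot creations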
Hi Shyam,

I have tested creating snapshots on the subvolume path instead of on a directory inside it, and it all works fine. No issues seen.

Moving to verified.
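To illustrate the case being verified, a small sketch that snapshots at the subvolume's own path via its .snap directory (as opposed to snapshotting a directory deeper in the tree). The mount point and names are assumptions, and this assumes snapshots are enabled on the filesystem:

# Sketch of the case verified above: snapshot taken at the subvolume's own
# path (through its .snap directory) rather than at a directory inside it.
# Mount point and names are assumptions for illustration only.
import os
import subprocess

MOUNT = "/mnt/cephfs"  # assumed client mount of the CephFS volume

# Resolve the subvolume's path inside the volume.
subvol_path = subprocess.run(
    ["ceph", "fs", "subvolume", "getpath", "cephfs", "subvol1"],
    check=True, capture_output=True, text=True).stdout.strip()

# Creating a directory under .snap at the subvolume path takes a snapshot
# of the whole subvolume (the case the relaxed limit applies to).
os.mkdir(os.path.join(MOUNT + subvol_path, ".snap", "snap-at-subvol-root"))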
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 4.1 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4144
(In reply to Hemanth Kumar from comment #19)
> Hi Shyam,
>
> I have tested creating snapshots on the subvolume path instead of on a
> directory inside it, and it all works fine. No issues seen.
>
> Moving to verified.

Was it tested at scale? How many snapshots?