Description of problem (please be as detailed as possible and provide log snippets):

1. CephFS clone creation is limited to 4 parallel clones at a time; all further clone-create requests are queued. This makes CephFS cloning very slow when a large number of clones is being created.

2. CephCSI/Kubernetes storage does not have a mechanism to delete in-progress clones, so deleting the corresponding Kubernetes PVC object may leave a stale resource behind.

Due to the above reasons, we are seeing a lot of customer cases with stale CephFS clones. Examples:

- https://bugzilla.redhat.com/show_bug.cgi?id=2148365
- https://bugzilla.redhat.com/show_bug.cgi?id=2160422
- https://bugzilla.redhat.com/show_bug.cgi?id=2182168

This situation requires manual cleanup (see the cleanup sketch at the end of this report).

Version of all relevant components (if applicable):
All supported ODF backing Ceph versions

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
Yes

Steps to Reproduce (a scripted sketch follows below):
1. Create a PVC.
2. Create a snapshot of the PVC and restore the snapshot to a new PVC.
3. Repeat step 2 many times.
4. Delete all PVCs and snapshots, then restart the CephFS provisioner pod.

Actual results:
A lot of in-progress CephFS clones and stale resources from completed clones.

Expected results:
Zero in-progress CephFS clones or stale resources.

Additional info:
We can avoid this situation if CephFS clone-create requests are not queued, as happens currently. Preferable solutions:

1. Have `ceph fs subvolume snapshot clone` reject clones when the number of in-progress clones == max_concurrent_clones, and provide a flag to allow pending clones: `--max_pending_clones=<int>` (default value 0).
2. Add an option to `ceph fs subvolume snapshot clone` to limit the number of pending clones: `--max_pending_clones=<int>` (default value infinity). (Option 2 would require changes in CephCSI and possibly other components.)

CephCSI and Kubernetes storage inherently retry requests with exponential backoff, so even if a few requests fail, they will be retried and the CephFS clones will eventually complete. By not allowing clones to queue up in a pending state, we avoid stale resources and tell users exactly why their clones are taking so long.
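For convenience, a minimal scripted sketch of the reproduction steps above. All names here are assumptions, not from this bug: a source PVC `source-pvc` in namespace `test`, and the default ODF classes `ocs-storagecluster-cephfs` / `ocs-storagecluster-cephfsplugin-snapclass`. Adjust for your cluster.

```shell
# Step 2: snapshot the source PVC once.
kubectl -n test apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: snap-1
spec:
  volumeSnapshotClassName: ocs-storagecluster-cephfsplugin-snapclass
  source:
    persistentVolumeClaimName: source-pvc
EOF

# Step 3: restore the snapshot to many PVCs. Each restore is a CephFS
# clone, and only 4 run in parallel -- the rest queue up as "pending".
for i in $(seq 1 20); do
  kubectl -n test apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restore-$i
spec:
  storageClassName: ocs-storagecluster-cephfs
  dataSource:
    name: snap-1
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: [ReadWriteMany]
  resources:
    requests:
      storage: 1Gi
EOF
done

# Step 4: delete everything while clones are still pending/in-progress,
# then restart the provisioner (pod label is the usual ODF one).
for i in $(seq 1 20); do kubectl -n test delete pvc "restore-$i" --wait=false; done
kubectl -n test delete volumesnapshot snap-1
kubectl -n openshift-storage delete pod -l app=csi-cephfsplugin-provisioner
```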
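And a minimal sketch of the manual cleanup admins currently have to perform. It assumes the `csi` subvolume group used by ODF; `<vol>` and `<clone-subvol>` are placeholders. Treat this as a sketch, not a supported procedure.

```shell
# Current parallelism limit (default 4). Raising it drains a backlog
# faster, but does not remove the queueing behaviour described above.
ceph config get mgr mgr/volumes/max_concurrent_clones
ceph config set mgr mgr/volumes/max_concurrent_clones 8

# Find subvolumes left behind after PVC deletion (ODF CSI volumes live
# in the "csi" subvolume group).
ceph fs subvolume ls <vol> --group_name csi

# Check a suspect clone; state is pending, in-progress, complete,
# failed, or canceled.
ceph fs clone status <vol> <clone-subvol> --group_name csi

# Cancel a pending/in-progress clone, then remove the stale subvolume
# (a canceled or partial clone must be removed with --force).
ceph fs clone cancel <vol> <clone-subvol> --group_name csi
ceph fs subvolume rm <vol> <clone-subvol> --group_name csi --force
```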
Please update the RDT flag/text appropriately.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.17.0 Security, Enhancement, & Bug Fix Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2024:8676