Bug 2190161

Summary: Support to reject CephFS clones if cloner threads are not available.
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Rakshith <rar>
Component: ceph
Sub component: CephFS
Assignee: Neeraj Pratap Singh <neesingh>
QA Contact: Elad <ebenahar>
Docs Contact: ---
Status: ASSIGNED
Severity: medium
Priority: unspecified
CC: bniver, muagarwa, odf-bz-bot, sostapov
Version: 4.9   
Target Milestone: ---   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard: ---
Fixed In Version: ---
Doc Type: If docs needed, set a value
Clones: 2196829 (view as bug list)
Type: Bug
Bug Depends On: 2196829    
Bug Blocks:    

Description Rakshith 2023-04-27 10:50:05 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

1. CephFS clone creation has a limit of 4 parallel clones at a time
(mgr/volumes/max_concurrent_clones); the rest of the clone-create requests are
queued. This makes CephFS cloning very slow when a large number of clones is
being created (see the CLI sketch after this list).

2. CephCSI/Kubernetes storage does not have a mechanism to cancel in-progress
clones, and deleting the corresponding Kubernetes object (PVC) may leave a
stale resource behind.
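
For reference, a rough sketch of how the concurrency limit and per-clone state
can be inspected with the ceph CLI (volume/clone/group names are placeholders):

# current limit on parallel cloner threads (default 4)
$ ceph config get mgr mgr/volumes/max_concurrent_clones

# state of a given clone: pending, in-progress, complete or failed
$ ceph fs clone status <vol_name> <clone_name> --group_name <group_name>

Clone requests beyond the limit sit in the pending state until a cloner thread
picks them up.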

Due to the above reasons, we are seeing a lot of customer cases with stale
cephfs clones.
Example: 
- https://bugzilla.redhat.com/show_bug.cgi?id=2148365
- https://bugzilla.redhat.com/show_bug.cgi?id=2160422
- https://bugzilla.redhat.com/show_bug.cgi?id=2182168

This situation requires manual cleanup.
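
The manual cleanup is roughly along these lines, assuming the stale clone
subvolumes are known (the names and the "csi" group are placeholders):

# cancel a pending/in-progress clone
$ ceph fs clone cancel <vol_name> <clone_name> --group_name csi

# force-remove the stale clone subvolume left behind
$ ceph fs subvolume rm <vol_name> <clone_name> --group_name csi --force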

Version of all relevant components (if applicable):
All supported ODF backing ceph versions

Is there any workaround available to the best of your knowledge?
no

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
yes

Can this issue be reproduced from the UI?
yes

Steps to Reproduce:
1. create a PVC
2. create a snapshot of it and restore the snapshot to a new PVC
3. repeat step 2 many times
4. delete all PVCs and snapshots, then restart the cephfs provisioner pod
(a rough kubectl sketch of these steps follows below)
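
A minimal sketch of the reproducer with kubectl; the storage class, snapshot
class, namespace and pod label below are assumptions and will differ per
cluster:

# 1. parent PVC
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: parent-pvc
spec:
  accessModes: ["ReadWriteMany"]
  resources: { requests: { storage: 1Gi } }
  storageClassName: ocs-storagecluster-cephfs
EOF

# 2. snapshot the parent PVC, then restore it to a new PVC
#    (each restore triggers a cephfs subvolume clone)
$ kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: parent-snap
spec:
  volumeSnapshotClassName: ocs-storagecluster-cephfsplugin-snapclass
  source: { persistentVolumeClaimName: parent-pvc }
EOF
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc-1
spec:
  accessModes: ["ReadWriteMany"]
  resources: { requests: { storage: 1Gi } }
  storageClassName: ocs-storagecluster-cephfs
  dataSource: { name: parent-snap, kind: VolumeSnapshot, apiGroup: snapshot.storage.k8s.io }
EOF

# 3. repeat the restore with different PVC names many times
# 4. delete everything and restart the cephfs provisioner pod
$ kubectl delete pvc --all && kubectl delete volumesnapshot --all
$ kubectl delete pod -n openshift-storage -l app=csi-cephfsplugin-provisioner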

Actual results:
A lot of in-progress cephfs clones, plus stale resources left behind by
completed clones.
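
The leftovers can be seen on the ceph side, e.g. (the filesystem name is a
placeholder; CephCSI normally provisions into the "csi" subvolume group):

# stale clone subvolumes
$ ceph fs subvolume ls <fs_name> --group_name csi

# leftover snapshots on a given subvolume
$ ceph fs subvolume snapshot ls <fs_name> <subvol_name> --group_name csi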

Expected results:
Zero in-progress cephfs clones or stale resources.

Additional info:

We can avoid this situation if cephfs clone-create requests are rejected
instead of being queued, as currently happens.

Preferable solutions: 

1. Have the `ceph fs subvolume snapshot clone` command reject clones when the
number of in-progress clones equals max_concurrent_clones, and provide a flag
to allow pending clones, `--max_pending_clones=<int>` (default value 0).

2. Add an option to `ceph fs subvolume snapshot clone` to limit the number of
pending clones, `--max_pending_clones=<int>` (default value infinity).
(Option 2 would require changes in cephcsi and possibly other components.)
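
With option 1, a clone requested while all cloner threads are busy would fail
immediately instead of being queued; roughly (the flag is the proposal above,
not an existing option, and the names are placeholders):

# proposed behaviour, not available today
$ ceph fs subvolume snapshot clone <vol_name> <subvol_name> <snap_name> <clone_name> \
    --group_name csi --max_pending_clones=0
# expected outcome: the command returns an error (e.g. EAGAIN) instead of
# queueing the clone, and the caller is free to retry later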

CephCSI and Kubernetes storage inherently retry requests with exponential
backoff, so even if a few requests fail, they will be retried and the cephfs
clones will eventually complete.

By not allowing clones to queue up in the pending state, we avoid stale
resources and tell users exactly why their clones are taking so long.