Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla. As of Monday, February 2, please use https://ibm-ceph.atlassian.net/ for all bug tracking.

Bug 2429623

Summary: [GSS][IBM_Support] Shallow Clone does not work as expected when an RWX clone is in progress.
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Bipin Kunal <bkunal>
Component: CephFS
Assignee: Venky Shankar <vshankar>
Status: CLOSED UPSTREAM
QA Contact: sumr
Severity: urgent
Docs Contact: Rivka Pollack <rpollack>
Priority: urgent
Version: 8.1
CC: ceph-eng-bugs, cephqe-warriors, mamohan, muagarwa, ngangadh, prallabh, rpollack, vshankar
Target Milestone: ---
Flags: rpollack: needinfo? (vshankar)
Target Release: 9.0z1
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
Cause: The CephFS Python binding used by the asynchronous cloner in the volumes module (mgr/volumes) invokes the client library API while holding the Python Global Interpreter Lock (GIL).
Consequence: Because the GIL is held for a prolonged period, other subvolume operations issued through the volumes module block while waiting to acquire the GIL.
Fix: Invoke the CephFS client API without holding the GIL.
Result: Volumes module operations can make progress even while an asynchronous clone is ongoing.
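The GIL contention described in the Doc Text can be illustrated with a small, self-contained Python sketch (this is not the mgr/volumes code; names and timings here are illustrative). `ctypes.PyDLL` keeps the GIL held across a foreign call, while `ctypes.CDLL` releases it, so blocking calls serialize in the first case and overlap in the second — the same effect the fix removes by calling the CephFS client API without the GIL:

```python
# Illustrative sketch only -- not the mgr/volumes code. It shows why holding
# the GIL across a blocking C-library call stalls every other Python thread.
import ctypes
import ctypes.util
import threading
import time

_libc_path = ctypes.util.find_library("c") or "libc.so.6"
libc_nogil = ctypes.CDLL(_libc_path)   # ctypes releases the GIL around calls
libc_gil = ctypes.PyDLL(_libc_path)    # ctypes keeps the GIL held during calls


def run_threads(lib, n_threads=4, usec=200_000):
    """Run n_threads concurrent usleep() calls; return elapsed wall time."""
    threads = [threading.Thread(target=lib.usleep, args=(usec,))
               for _ in range(n_threads)]
    t0 = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.monotonic() - t0


if __name__ == "__main__":
    t_release = run_threads(libc_nogil)  # sleeps overlap across threads
    t_hold = run_threads(libc_gil)       # sleeps serialize behind the GIL
    print(f"GIL released: {t_release:.2f}s, GIL held: {t_hold:.2f}s")
```

With the GIL released, four 0.2 s sleeps overlap and finish in roughly 0.2 s of wall time; with the GIL held they serialize to roughly 0.8 s, which mirrors how an ongoing asynchronous clone blocked other subvolume operations.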
Story Points: ---
Clone Of:
Cloned to: 2429624 (view as bug list)
Environment:
Last Closed: 2026-03-05 07:25:28 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 2388233, 2429624

Description Bipin Kunal 2026-01-14 16:19:43 UTC
Description of problem:

Shallow clone does not work as expected when an RWX clone is in progress: the ROX (shallow) clone takes about the same time to complete as the RWX clone itself.

 

The OCP platform infrastructure and deployment type (AWS, Bare Metal, VMware, etc. Please clarify if it is platform agnostic deployment), (IPI/UPI):

AWS

 

The ODF deployment type (Internal, External, Internal-Attached (LSO), Multicluster, DR, Provider, etc):

Internal

 

The version of all relevant components (OCP, ODF, RHCS, ACM whichever is applicable):

OCP 4.19.10

ODF 4.19.8

 

Does this issue impact your ability to continue to work with the product?

Yes

 

Is there any workaround available to the best of your knowledge?

No

 

Can this issue be reproduced? If so, please provide the hit rate

Yes, 100%

 

Steps to Reproduce:

1. Create a CephFS PVC, add some (10+ GB) data to the PVC. 

2. Create a snapshot.

3. Restore the snapshot as RWX and then restore the snapshot as ROX while RWX is in progress.

4. ROX clone doesn't complete instantly and takes time.
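The reproduction steps above can be sketched as Kubernetes manifests. These are illustrative only: the PVC/snapshot names, sizes, and the storage/snapshot class names (shown here as the usual ODF internal-mode defaults) are assumptions, not values from the reporter's cluster.

```yaml
# 2. Snapshot the populated CephFS PVC (assumed here to be named "source-pvc").
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: source-snap
spec:
  volumeSnapshotClassName: ocs-storagecluster-cephfsplugin-snapclass
  source:
    persistentVolumeClaimName: source-pvc
---
# 3a. Restore the snapshot as an RWX clone (a full copy; slow for 10+ GB of data).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restore-rwx
spec:
  storageClassName: ocs-storagecluster-cephfs
  accessModes: [ReadWriteMany]
  resources:
    requests:
      storage: 15Gi
  dataSource:
    name: source-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
---
# 3b. While the RWX restore is still in progress, restore the same snapshot as
#     ROX. A ROX restore is a shallow clone and should complete almost
#     instantly; per this bug it instead takes about as long as the RWX clone.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restore-rox
spec:
  storageClassName: ocs-storagecluster-cephfs
  accessModes: [ReadOnlyMany]
  resources:
    requests:
      storage: 15Gi
  dataSource:
    name: source-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
```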

 

Actual results:

ROX clone takes time to complete.

 

Expected results:

ROX clone should complete instantly.

Additional info:

 NA

Comment 2 Storage PM bot 2026-01-14 16:19:53 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 9 Red Hat Bugzilla 2026-03-05 07:25:28 UTC
This product has been discontinued or is no longer tracked in Red Hat Bugzilla.