Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
This project is now read-only. Starting Monday, February 2, please use https://ibm-ceph.atlassian.net/ for all bug tracking management.

Bug 2349154

Summary: [RFE] Source information (subvol/snapshot) of clone is removed from .meta file after cloning has been completed.
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Raimund Sacherer <rsachere>
Component: CephFS
Assignee: Rishabh Dave <ridave>
Status: CLOSED ERRATA
QA Contact: sumr
Severity: medium
Docs Contact: Rivka Pollack <rpollack>
Priority: unspecified
Version: 7.1
CC: ceph-eng-bugs, cephqe-warriors, mamohan, ridave, rpollack, vshankar
Target Milestone: ---
Keywords: FutureFeature
Target Release: 9.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-20.1.0-43
Doc Type: Enhancement
Doc Text:
.Source information of clone subvolumes is now preserved
Previously, after cloning was completed, the source information (subvolume or snapshot) of the clone was removed from the `.meta` file. As a result, when users ran the `subvolume info` command for a clone subvolume, they could not view details about its source. With this enhancement, source information for a clone subvolume is now preserved even after cloning is complete. This allows the `subvolume info` command to include details about the source subvolume in its output, making it easier for users to find and view the origin of a clone.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2026-01-29 06:54:06 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 2388233

Description Raimund Sacherer 2025-03-01 13:41:38 UTC
Description of problem:

With some customers we have to help clean up orphaned subvolumes. This often happens when OCP has cloning issues: a large number of pending clones is created, the clones eventually complete, but they are never consumed by OCP. In the current customer case there is around 100 TB of data in 3700 unused subvolumes, most of them clones. With RBD we can follow a chain of volume->snap->clone->snap->clone etc., but with CephFS this data is not available.

During cloning we can see the source subvolume and snapshot data, but this information is lost as soon as the clone has finished:

Test:
```
[quickcluster@mgmt-0 ceph-odf]$ ./cephfs-volume-tool --list-clones

 *********************************************************************************************
  Please note that this is not to be treated as a Red Hat official binary / software,         
  and feel free to go through the source code to look for what's happening behind the scenes. 
 *********************************************************************************************

./cephfs-volume-tool - Version: 4

For details please see the KCS https://access.redhat.com/solutions/7080701

[100%] 3 of 3 subvolumes. Found: 'P': 0 'In-P': 1 'C': 0. Action taken: 0 'P' canceled. 0 'In-P' canceled. 0 'C' Removed.                                                                                                                                                                                                                                                                                                                                                                                  
Log file:
cat ./cephfs-volume-tool_2025-03-01T08:25:29-05:00.log | column -t -s $'\t'

Cmd file:
cat ./cephfs-volume-tool_2025-03-01T08:25:29-05:00.cmd

Finished.
[quickcluster@mgmt-0 ceph-odf]$ cat ./cephfs-volume-tool_2025-03-01T08:25:29-05:00.log | column -t -s $'\t'
2025-03-01T08:25:30-05:00  not-canceling  test-delete                  95  'clone status' is not supported on volume type 'subvolume'.
2025-03-01T08:25:30-05:00  not-canceling  test-clone-from-test-delete  0   complete
2025-03-01T08:25:31-05:00  not-canceling  test-clone2-from-clone       0   in-progress
```

We see cloning is in progress.


Looking at the new .meta file:
```
[quickcluster@mgmt-0 ceph-odf]$ sudo cat   /mnt/cephfs/volumes/csi/test-clone2-from-clone/.meta
[GLOBAL]
version = 2
type = clone
path = /volumes/csi/test-clone2-from-clone/55e13b9a-780b-4534-8b1c-16ed122fb5d5
state = in-progress

[source]
volume = cephfs
group = csi
subvolume = test-clone-from-test-delete
snapshot = test-clone-from-test-delete@snapshot
```

We see all the relevant data in `[source]`.
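Since the `.meta` file is a simple INI-style file, the `[source]` section can be read with standard tooling while the clone is still in progress. Below is a minimal Python sketch; the file contents are inlined from the output above (in practice they would be read from the clone's `.meta` path), and the function name `clone_source` is purely illustrative:

```python
import configparser

# .meta contents as shown above; in practice this would be read from
# /mnt/cephfs/volumes/<group>/<clone>/.meta while the clone is still
# in the "in-progress" state, since the [source] section disappears
# once the clone completes.
META = """\
[GLOBAL]
version = 2
type = clone
state = in-progress

[source]
volume = cephfs
group = csi
subvolume = test-clone-from-test-delete
snapshot = test-clone-from-test-delete@snapshot
"""

def clone_source(meta_text):
    """Return the [source] section of a clone's .meta file as a dict,
    or None if the section is absent (e.g. after cloning completed)."""
    cp = configparser.ConfigParser()
    cp.read_string(meta_text)
    if not cp.has_section("source"):
        return None
    return dict(cp["source"])

src = clone_source(META)
print(src["subvolume"], src["snapshot"])
```

Recording this section for each clone before it completes would allow reconstructing the volume->snap->clone chain later, similar to what RBD offers.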



In addition, I found this strange behaviour during the cloning process:
```
[quickcluster@mgmt-0 ceph-odf]$ sudo ceph fs clone status --clone_name test-clone2-from-clone --vol_name cephfs --group_name csi 
{
  "status": {
    "state": "in-progress",
    "source": {
      "volume": "cephfs",
      "subvolume": "test-clone-from-test-delete",
      "snapshot": "test-clone-from-test-delete@snapshot",
      "group": "csi"
    },
    "progress_report": {
      "percentage cloned": "92.233%",
      "amount cloned": "1.5G/1.7G",                        <---- 1.5G out of 1.7G, that does look right.
      "files cloned": "60k/57k"                            <---- 60k out of 57k? this does *not* look right.
    }
  }
}
```
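Until the source information is preserved, the same fields can also be captured from the in-progress `clone status` output. A minimal Python sketch, assuming `ceph fs clone status` with `--format json` emits the structure shown above (here the JSON is inlined rather than fetched from the command; the function name `source_of` is illustrative):

```python
import json

# Status document as printed above; the progress_report fields are
# omitted since only the source fields are of interest here.
STATUS_JSON = """
{
  "status": {
    "state": "in-progress",
    "source": {
      "volume": "cephfs",
      "subvolume": "test-clone-from-test-delete",
      "snapshot": "test-clone-from-test-delete@snapshot",
      "group": "csi"
    }
  }
}
"""

def source_of(status_text):
    """Extract (group, subvolume, snapshot) from a clone status document;
    returns (None, None, None) if the source section is missing."""
    status = json.loads(status_text)["status"]
    src = status.get("source", {})
    return src.get("group"), src.get("subvolume"), src.get("snapshot")

print(source_of(STATUS_JSON))
```

Note that this only works while the clone is in progress; once the state is `complete`, `clone status` no longer applies and, as shown next, the `.meta` file no longer carries the source section either.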

After cloning has finished, checking the `.meta` file again:
```
[quickcluster@mgmt-0 ceph-odf]$ sudo cat   /mnt/cephfs/volumes/csi/test-clone2-from-clone/.meta
[GLOBAL]
version = 2
type = clone
path = /volumes/csi/test-clone2-from-clone/55e13b9a-780b-4534-8b1c-16ed122fb5d5
state = complete
```

All information about the source from which the clone was created is gone.


Engineering Q:

1. Please retain the clone's source information in some form in the `.meta` file; this helps establish from which original subvolume the clone was taken.
2. Please check the `files cloned` output, which looks wrong (60k files cloned out of a total of 57k).


Version-Release number of selected component (if applicable):
This cluster runs a test image from Guillaume, on which I am testing a new feature he developed (creating separate DB and BLOCK on the same NVMe device); the version is therefore:
ceph version 19.3.0-7350-g7fcd9cba (7fcd9cbac04f022c8a95dfed302e308881bc1da4) squid (dev)


However, I have seen the issue of source information disappearing from `.meta` files after cloning completes since at least the RHCS 5 days.


How reproducible:
Always

Steps to Reproduce:
N/A

Actual results:
N/A

Expected results:
N/A

Additional info:

Comment 1 Raimund Sacherer 2025-03-01 13:45:49 UTC
The source information should also be made available in the `subvolume info` output, which currently does not include it:

```
[quickcluster@mgmt-0 ceph-odf]$ sudo ceph fs subvolume info --sub_name test-clone2-from-clone --vol_name cephfs --group_name csi  
{
    "atime": "2025-02-25 12:56:04",
    "bytes_pcent": "undefined",
    "bytes_quota": "infinite",
    "bytes_used": 1876279332,
    "created_at": "2025-03-01 13:21:02",
    "ctime": "2025-03-01 13:32:16",
    "data_pool": "cephfs_data",
    "earmark": "",
    "features": [
        "snapshot-clone",
        "snapshot-autoprotect",
        "snapshot-retention"
    ],
    "flavor": 2,
    "gid": 0,
    "mode": 16877,
    "mon_addrs": [
        "10.0.89.226:6789",
        "10.0.92.132:6789",
        "10.0.91.214:6789",
        "10.0.94.9:6789",
        "10.0.88.182:6789"
    ],
    "mtime": "2025-03-01 13:15:59",
    "path": "/volumes/csi/test-clone2-from-clone/55e13b9a-780b-4534-8b1c-16ed122fb5d5",
    "pool_namespace": "",
    "state": "complete",
    "type": "clone",
    "uid": 0
}
```

Comment 13 errata-xmlrpc 2026-01-29 06:54:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 9.0 Security and Enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2026:1536