This bug was initially created as a copy of Bug #2094822.

I am copying this bug because:

Description of problem:
Clone operations are failing with an AssertionError when a large number of clones is created; in my case I created 130.

[root@ceph-amk-bz-2-qa3ps0-node7 _nogroup]# ceph fs clone status cephfs clone_status_142
Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/volumes/fs/operations/versions/__init__.py", line 96, in get_subvolume_object
    self.upgrade_to_v2_subvolume(subvolume)
  File "/usr/share/ceph/mgr/volumes/fs/operations/versions/__init__.py", line 57, in upgrade_to_v2_subvolume
    version = int(subvolume.metadata_mgr.get_global_option('version'))
  File "/usr/share/ceph/mgr/volumes/fs/operations/versions/metadata_manager.py", line 144, in get_global_option
    return self.get_option(MetadataManager.GLOBAL_SECTION, key)
  File "/usr/share/ceph/mgr/volumes/fs/operations/versions/metadata_manager.py", line 138, in get_option
    raise MetadataMgrException(-errno.ENOENT, "section '{0}' does not exist".format(section))
volumes.fs.exception.MetadataMgrException: -2 (section 'GLOBAL' does not exist)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 1446, in _handle_command
    return self.handle_command(inbuf, cmd)
  File "/usr/share/ceph/mgr/volumes/module.py", line 437, in handle_command
    return handler(inbuf, cmd)
  File "/usr/share/ceph/mgr/volumes/module.py", line 34, in wrap
    return f(self, inbuf, cmd)
  File "/usr/share/ceph/mgr/volumes/module.py", line 682, in _cmd_fs_clone_status
    vol_name=cmd['vol_name'], clone_name=cmd['clone_name'], group_name=cmd.get('group_name', None))
  File "/usr/share/ceph/mgr/volumes/fs/volume.py", line 622, in clone_status
    with open_subvol(self.mgr, fs_handle, self.volspec, group, clonename, SubvolumeOpType.CLONE_STATUS) as subvolume:
  File "/lib64/python3.6/contextlib.py", line 81, in __enter__
    return next(self.gen)
  File "/usr/share/ceph/mgr/volumes/fs/operations/subvolume.py", line 72, in open_subvol
    subvolume = loaded_subvolumes.get_subvolume_object(mgr, fs, vol_spec, group, subvolname)
  File "/usr/share/ceph/mgr/volumes/fs/operations/versions/__init__.py", line 101, in get_subvolume_object
    self.upgrade_legacy_subvolume(fs, subvolume)
  File "/usr/share/ceph/mgr/volumes/fs/operations/versions/__init__.py", line 78, in upgrade_legacy_subvolume
    assert subvolume.legacy_mode
AssertionError

Version-Release number of selected component (if applicable):
[root@ceph-amk-bz-1-wu0ar7-node7 ~]# ceph versions
{
    "mon": {
        "ceph version 16.2.8-27.el8cp (b0bd3a6c6f24d3ac855dde96982871257bef866f) pacific (stable)": 3
    },
    "mgr": {
        "ceph version 16.2.8-27.el8cp (b0bd3a6c6f24d3ac855dde96982871257bef866f) pacific (stable)": 2
    },
    "osd": {
        "ceph version 16.2.8-27.el8cp (b0bd3a6c6f24d3ac855dde96982871257bef866f) pacific (stable)": 12
    },
    "mds": {
        "ceph version 16.2.8-27.el8cp (b0bd3a6c6f24d3ac855dde96982871257bef866f) pacific (stable)": 3
    },
    "overall": {
        "ceph version 16.2.8-27.el8cp (b0bd3a6c6f24d3ac855dde96982871257bef866f) pacific (stable)": 20
    }
}
[root@ceph-amk-bz-1-wu0ar7-node7 ~]#

How reproducible:
1/1

Steps to Reproduce:
1. Create a subvolume group:
   ceph fs subvolumegroup create cephfs subvolgroup_clone_status_1
2. Create a subvolume:
   ceph fs subvolume create cephfs subvol_clone_status --size 5368706371 --group_name subvolgroup_clone_status_1
3. Kernel-mount the volume and fill it with data.
4. Create a snapshot:
   ceph fs subvolume snapshot create cephfs subvol_clone_status snap_1 --group_name subvolgroup_clone_status_1
5. Create 200 clones from the above snapshot (see the loop sketch under Additional info), e.g.:
   ceph fs subvolume snapshot clone cephfs subvol_clone_status snap_1 clone_status_1 --group_name subvolgroup_clone_status_1

Actual results:
ceph fs clone status fails with EINVAL and the AssertionError traceback shown in the description.

Expected results:
Should fail gracefully.

Additional info:
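A minimal sketch of the clone-creation loop for step 5, assuming the same volume, subvolume, snapshot, and group names used above and numbered clone names clone_status_1 .. clone_status_200 (the clones are created in the default group, matching the _nogroup path seen in the failing clone status output):

  # create 200 clones of snap_1
  for i in $(seq 1 200); do
      ceph fs subvolume snapshot clone cephfs subvol_clone_status snap_1 clone_status_${i} \
          --group_name subvolgroup_clone_status_1
  done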
Hi Kotresh,

Created a subvolume with a smaller amount of data.
Created more than 150 clones out of the subvolume.
Did not observe any errors.

Verified in version:
[root@ceph-amk-bz-2-8zczch-node7 ~]# ceph versions
{
    "mon": {
        "ceph version 17.2.5-8.el9cp (f2be93d8b38077bd58e70cf252dbbb4cf49e95e4) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.5-8.el9cp (f2be93d8b38077bd58e70cf252dbbb4cf49e95e4) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.5-8.el9cp (f2be93d8b38077bd58e70cf252dbbb4cf49e95e4) quincy (stable)": 12
    },
    "mds": {
        "ceph version 17.2.5-8.el9cp (f2be93d8b38077bd58e70cf252dbbb4cf49e95e4) quincy (stable)": 3
    },
    "overall": {
        "ceph version 17.2.5-8.el9cp (f2be93d8b38077bd58e70cf252dbbb4cf49e95e4) quincy (stable)": 20
    }
}

A detailed document with all the commands:
https://docs.google.com/document/d/1VuR2PlYrUwDWk6Aw1kKGxX18HcUZZIQn92yZgkmNhbI/edit
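For reference, a sketch of how the clone statuses can be polled during verification, assuming the same clone_status_<n> naming as above and clones in the default group (so no --group_name is passed, as in the original failing command):

  # check that every clone reports a valid status instead of a traceback
  for i in $(seq 1 150); do
      ceph fs clone status cephfs clone_status_${i}
  done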
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2023:1360