Bug 2120491
| Summary: | CephFS: mgr/volumes: Intermittent ParsingError failure in mgr/volumes module during "clone cancel" | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Kotresh HR <khiremat> |
| Component: | CephFS | Assignee: | Kotresh HR <khiremat> |
| Status: | CLOSED ERRATA | QA Contact: | Amarnath <amk> |
| Severity: | medium | Priority: | unspecified |
| Version: | 5.2 | Target Release: | 5.3z1 |
| Hardware: | All | OS: | All |
| Fixed In Version: | ceph-16.2.10-100.el8cp | Doc Type: | If docs needed, set a value |
| Last Closed: | 2023-02-28 10:05:14 UTC | Type: | Bug |
| CC: | ceph-eng-bugs, cephqe-warriors, hyelloji, tserlin, vshankar | | |
Description
Kotresh HR
2022-08-23 05:00:48 UTC
Hi,

I tried creating and canceling a clone 30 times and did not hit the issue. The mgr logs for the run are attached.

Steps followed:

[root@ceph-amk-bootstrap-0q5w8t-node7 cephfs_clone_cancel]# ceph versions
{
    "mon": {
        "ceph version 16.2.10-109.el8cp (167b05ebd8472e32b90eb52d06b9714d05fe3fd3) pacific (stable)": 3
    },
    "mgr": {
        "ceph version 16.2.10-109.el8cp (167b05ebd8472e32b90eb52d06b9714d05fe3fd3) pacific (stable)": 2
    },
    "osd": {
        "ceph version 16.2.10-109.el8cp (167b05ebd8472e32b90eb52d06b9714d05fe3fd3) pacific (stable)": 12
    },
    "mds": {
        "ceph version 16.2.10-109.el8cp (167b05ebd8472e32b90eb52d06b9714d05fe3fd3) pacific (stable)": 7
    },
    "overall": {
        "ceph version 16.2.10-109.el8cp (167b05ebd8472e32b90eb52d06b9714d05fe3fd3) pacific (stable)": 24
    }
}

Created a subvolume group and a subvolume, then mounted the subvolume:

[root@ceph-amk-bootstrap-0q5w8t-node7 ~]# ceph fs subvolumegroup create cephfs subvolgroup_clone_cancel_1
[root@ceph-amk-bootstrap-0q5w8t-node7 ~]# ceph fs subvolume create cephfs subvol_clone_cancel --size 5368706371 --group_name subvolgroup_clone_cancel_1
[root@ceph-amk-bootstrap-0q5w8t-node7 ~]# ceph fs subvolume ls cephfs --group_name subvolgroup_clone_cancel_1
[
    {
        "name": "subvol_clone_cancel"
    }
]
[root@ceph-amk-bootstrap-0q5w8t-node7 ~]# ceph fs subvolume getpath cephfs subvol_clone_cancel subvolgroup_clone_cancel_1
/volumes/subvolgroup_clone_cancel_1/subvol_clone_cancel/ce7d0ce1-726d-41b8-9d66-7d21eba75c9a
[root@ceph-amk-bootstrap-0q5w8t-node7 ~]# mkdir /mnt/cephfs_clone_cancel
[root@ceph-amk-bootstrap-0q5w8t-node7 ~]# ceph-fuse /mnt/cephfs_clone_cancel -r /volumes/subvolgroup_clone_cancel_1/subvol_clone_cancel/ce7d0ce1-726d-41b8-9d66-7d21eba75c9a
ceph-fuse[34680]: starting ceph client
2023-02-01T05:30:38.096-0500 7f67524de3c0 -1 init, newargv = 0x556d4fba03a0 newargc=15
ceph-fuse[34680]: starting fuse

Wrote data into the volume and created a snapshot:

[root@ceph-amk-bootstrap-0q5w8t-node7 cephfs_clone_cancel]# ceph fs subvolume snapshot create cephfs subvol_clone_cancel snap_2 --group_name subvolgroup_clone_cancel_1

Then ran 30 iterations of clone, immediate cancel, and snapshot info:

[root@ceph-amk-bootstrap-0q5w8t-node7 cephfs_clone_cancel]# for i in {1..30}; do echo $i; ceph fs subvolume snapshot clone cephfs subvol_clone_cancel snap_2 clone_status_$i --group_name subvolgroup_clone_cancel_1; ceph fs clone cancel cephfs clone_status_$i; ceph fs subvolume snapshot info cephfs subvol_clone_cancel snap_2 --group_name subvolgroup_clone_cancel_1; echo "##########################################"; done
1
{
    "created_at": "2023-02-01 10:52:36.653583",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "no"
}
##########################################
2
{
    "created_at": "2023-02-01 10:52:36.653583",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "no"
}
##########################################
3
Error EEXIST: subvolume 'clone_status_3' exists
Error EINVAL: cannot cancel -- clone finished (check clone status)
{
    "created_at": "2023-02-01 10:52:36.653583",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "no"
}
##########################################

Iterations 4 through 7 hit the same pair of errors (those clone names already existed, so the clones had finished before the cancel could land); iterations 8 through 30 ran cleanly, each printing the same snapshot info with "has_pending_clones": "no".
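For unattended re-runs, the same loop can be hardened to stop the moment the intermittent ParsingError from the bug summary appears. This is only a sketch: it assumes the mgr-side traceback is echoed back in the command's error output, and it uses fresh clone names (clone_cancel_test_*, our own) to avoid the EEXIST collisions above.

for i in {1..30}; do
    ceph fs subvolume snapshot clone cephfs subvol_clone_cancel snap_2 \
        clone_cancel_test_$i --group_name subvolgroup_clone_cancel_1
    # Capture the cancel's combined output; in the run above a clean
    # cancel printed nothing, so any output is worth inspecting.
    if ! err=$(ceph fs clone cancel cephfs clone_cancel_test_$i 2>&1); then
        echo "iteration $i: clone cancel failed: $err"
        # Assumption: the intermittent failure surfaces as a Python
        # ParsingError traceback in the command's error output.
        if echo "$err" | grep -q ParsingError; then
            echo "reproduced the ParsingError at iteration $i"
            break
        fi
    fi
done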
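Where the loop races the cancel against the snapshot info check, a small helper can make the check deterministic instead of best-effort. A minimal sketch, assuming jq is installed on the node; it parses the same JSON that "ceph fs subvolume snapshot info" prints above, and the function name is ours:

wait_no_pending_clones() {
    # Poll the snapshot until it reports no pending clones.
    local vol=$1 subvol=$2 snap=$3 group=$4
    while true; do
        pending=$(ceph fs subvolume snapshot info "$vol" "$subvol" "$snap" \
                      --group_name "$group" | jq -r '.has_pending_clones')
        [ "$pending" = "no" ] && return
        sleep 1
    done
}

# e.g. wait_no_pending_clones cephfs subvol_clone_cancel snap_2 subvolgroup_clone_cancel_1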
Spot-checked one of the canceled clones:

[root@ceph-amk-bootstrap-0q5w8t-node7 cephfs_clone_cancel]# ceph fs clone status cephfs clone_status_20
{
    "status": {
        "state": "canceled",
        "source": {
            "volume": "cephfs",
            "subvolume": "subvol_clone_cancel",
            "snapshot": "snap_2",
            "group": "subvolgroup_clone_cancel_1"
        },
        "failure": {
            "errno": "4",
            "error_msg": "user interrupted clone operation"
        }
    }
}

[root@ceph-amk-bootstrap-0q5w8t-node7 cephfs_clone_cancel]# ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    180 GiB   92 GiB   88 GiB    88 GiB      49.04
TOTAL  180 GiB   92 GiB   88 GiB    88 GiB      49.04

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics   1    1  0 B            0  0 B          0     20 GiB
cephfs.cephfs.meta      2   32  34 MiB        51  103 MiB   0.17     20 GiB
cephfs.cephfs.data      3   32  9.0 GiB    5.21k  27 GiB   30.69     20 GiB
cephfs.cephfs_1.meta    4   32  80 MiB        43  240 MiB   0.38     20 GiB
cephfs.cephfs_1.data    5  128  20 GiB    20.48k  60 GiB   49.66     20 GiB
rbd_io                  6   64  4.2 MiB       23  13 MiB    0.02     20 GiB
cephfs.cephfs_2.meta    7   32  3.2 KiB       22  96 KiB       0     20 GiB
cephfs.cephfs_2.data    8   32  0 B            0  0 B          0     20 GiB

[root@ceph-amk-bootstrap-0q5w8t-node7 cephfs_clone_cancel]# ceph crash ls
[root@ceph-amk-bootstrap-0q5w8t-node7 cephfs_clone_cancel]#

No crashes were reported. Finally, swept the status of every canceled clone:

[root@ceph-amk-bootstrap-0q5w8t-node7 cephfs_clone_cancel]# for i in {8..30}; do echo $i; ceph fs clone status cephfs clone_status_$i; echo "##########################################"; done
8
{
    "status": {
        "state": "canceled",
        "source": {
            "volume": "cephfs",
            "subvolume": "subvol_clone_cancel",
            "snapshot": "snap_2",
            "group": "subvolgroup_clone_cancel_1"
        },
        "failure": {
            "errno": "4",
            "error_msg": "user interrupted clone operation"
        }
    }
}
##########################################

Clones 9 through 30 printed the same "canceled" status, each with errno 4 and "user interrupted clone operation".
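Rather than eyeballing 23 near-identical status dumps, the sweep can block until each clone reaches a terminal state. Another sketch, again assuming jq; "canceled" is taken from the output above, while "complete" and "failed" are the other terminal clone states per the upstream CephFS documentation:

wait_clone_done() {
    # Poll .status.state from "ceph fs clone status" until terminal.
    local vol=$1 clone=$2
    while true; do
        state=$(ceph fs clone status "$vol" "$clone" | jq -r '.status.state')
        case "$state" in
            complete|canceled|failed) echo "$clone: $state"; return ;;
        esac
        sleep 1
    done
}

for i in {8..30}; do wait_clone_done cephfs clone_status_$i; done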
"error_msg": "user interrupted clone operation" } } } ########################################## 13 { "status": { "state": "canceled", "source": { "volume": "cephfs", "subvolume": "subvol_clone_cancel", "snapshot": "snap_2", "group": "subvolgroup_clone_cancel_1" }, "failure": { "errno": "4", "error_msg": "user interrupted clone operation" } } } ########################################## 14 { "status": { "state": "canceled", "source": { "volume": "cephfs", "subvolume": "subvol_clone_cancel", "snapshot": "snap_2", "group": "subvolgroup_clone_cancel_1" }, "failure": { "errno": "4", "error_msg": "user interrupted clone operation" } } } ########################################## 15 { "status": { "state": "canceled", "source": { "volume": "cephfs", "subvolume": "subvol_clone_cancel", "snapshot": "snap_2", "group": "subvolgroup_clone_cancel_1" }, "failure": { "errno": "4", "error_msg": "user interrupted clone operation" } } } ########################################## 16 { "status": { "state": "canceled", "source": { "volume": "cephfs", "subvolume": "subvol_clone_cancel", "snapshot": "snap_2", "group": "subvolgroup_clone_cancel_1" }, "failure": { "errno": "4", "error_msg": "user interrupted clone operation" } } } ########################################## 17 { "status": { "state": "canceled", "source": { "volume": "cephfs", "subvolume": "subvol_clone_cancel", "snapshot": "snap_2", "group": "subvolgroup_clone_cancel_1" }, "failure": { "errno": "4", "error_msg": "user interrupted clone operation" } } } ########################################## 18 { "status": { "state": "canceled", "source": { "volume": "cephfs", "subvolume": "subvol_clone_cancel", "snapshot": "snap_2", "group": "subvolgroup_clone_cancel_1" }, "failure": { "errno": "4", "error_msg": "user interrupted clone operation" } } } ########################################## 19 { "status": { "state": "canceled", "source": { "volume": "cephfs", "subvolume": "subvol_clone_cancel", "snapshot": "snap_2", "group": "subvolgroup_clone_cancel_1" }, "failure": { "errno": "4", "error_msg": "user interrupted clone operation" } } } ########################################## 20 { "status": { "state": "canceled", "source": { "volume": "cephfs", "subvolume": "subvol_clone_cancel", "snapshot": "snap_2", "group": "subvolgroup_clone_cancel_1" }, "failure": { "errno": "4", "error_msg": "user interrupted clone operation" } } } ########################################## 21 { "status": { "state": "canceled", "source": { "volume": "cephfs", "subvolume": "subvol_clone_cancel", "snapshot": "snap_2", "group": "subvolgroup_clone_cancel_1" }, "failure": { "errno": "4", "error_msg": "user interrupted clone operation" } } } ########################################## 22 { "status": { "state": "canceled", "source": { "volume": "cephfs", "subvolume": "subvol_clone_cancel", "snapshot": "snap_2", "group": "subvolgroup_clone_cancel_1" }, "failure": { "errno": "4", "error_msg": "user interrupted clone operation" } } } ########################################## 23 { "status": { "state": "canceled", "source": { "volume": "cephfs", "subvolume": "subvol_clone_cancel", "snapshot": "snap_2", "group": "subvolgroup_clone_cancel_1" }, "failure": { "errno": "4", "error_msg": "user interrupted clone operation" } } } ########################################## 24 { "status": { "state": "canceled", "source": { "volume": "cephfs", "subvolume": "subvol_clone_cancel", "snapshot": "snap_2", "group": "subvolgroup_clone_cancel_1" }, "failure": { "errno": "4", "error_msg": "user interrupted clone 
operation" } } } ########################################## 25 { "status": { "state": "canceled", "source": { "volume": "cephfs", "subvolume": "subvol_clone_cancel", "snapshot": "snap_2", "group": "subvolgroup_clone_cancel_1" }, "failure": { "errno": "4", "error_msg": "user interrupted clone operation" } } } ########################################## 26 { "status": { "state": "canceled", "source": { "volume": "cephfs", "subvolume": "subvol_clone_cancel", "snapshot": "snap_2", "group": "subvolgroup_clone_cancel_1" }, "failure": { "errno": "4", "error_msg": "user interrupted clone operation" } } } ########################################## 27 { "status": { "state": "canceled", "source": { "volume": "cephfs", "subvolume": "subvol_clone_cancel", "snapshot": "snap_2", "group": "subvolgroup_clone_cancel_1" }, "failure": { "errno": "4", "error_msg": "user interrupted clone operation" } } } ########################################## 28 { "status": { "state": "canceled", "source": { "volume": "cephfs", "subvolume": "subvol_clone_cancel", "snapshot": "snap_2", "group": "subvolgroup_clone_cancel_1" }, "failure": { "errno": "4", "error_msg": "user interrupted clone operation" } } } ########################################## 29 { "status": { "state": "canceled", "source": { "volume": "cephfs", "subvolume": "subvol_clone_cancel", "snapshot": "snap_2", "group": "subvolgroup_clone_cancel_1" }, "failure": { "errno": "4", "error_msg": "user interrupted clone operation" } } } ########################################## 30 { "status": { "state": "canceled", "source": { "volume": "cephfs", "subvolume": "subvol_clone_cancel", "snapshot": "snap_2", "group": "subvolgroup_clone_cancel_1" }, "failure": { "errno": "4", "error_msg": "user interrupted clone operation" } } } ########################################## Regards, Amarnath Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat Ceph Storage 5.3 Bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2023:0980 |