Bug 2130434
| Summary: | CephFS: mgr/volumes: Intermittent ParsingError failure in mgr/volumes module during "clone cancel" | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Kotresh HR <khiremat> |
| Component: | CephFS | Assignee: | Kotresh HR <khiremat> |
| Status: | CLOSED ERRATA | QA Contact: | Amarnath <amk> |
| Severity: | medium | Docs Contact: | Masauso Lungu <mlungu> |
| Priority: | unspecified | | |
| Version: | 5.2 | CC: | ceph-eng-bugs, cephqe-warriors, hyelloji, mlungu, pasik, vereddy |
| Target Milestone: | --- | | |
| Target Release: | 6.0 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-17.2.3-45.el9cp | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-03-20 18:58:27 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Kotresh HR
2022-09-28 07:24:14 UTC
Verified with different clone names and the clone cancel operation.
Did not observe any parser error.
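For context on the summary line: the parser error in question is Python's configparser.ParsingError, which the mgr/volumes module can surface when a subvolume's on-disk metadata file fails to parse during operations such as "clone cancel". The snippet below is only a minimal illustration of how configparser raises that error on a malformed or truncated line; it is not the mgr/volumes code path, and the section/key names are illustrative.

```python
import configparser

# A well-formed, metadata-style section parses cleanly.
good = "[GLOBAL]\nversion = 2\ntype = clone\nstate = pending\n"
configparser.ConfigParser().read_string(good)

# A line that is neither a section header, a key/value pair, nor a
# comment (for example, the tail left by an interrupted write) makes
# configparser raise ParsingError when the file is read back.
truncated = "[GLOBAL]\nversion = 2\nstat"  # hypothetical truncated line
try:
    configparser.ConfigParser().read_string(truncated)
except configparser.ParsingError as err:
    print("ParsingError:", err)
```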
Version tested:
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph versions
{
"mon": {
"ceph version 17.2.3-49.el9cp (70ade2732bfe4eb64129e97862bf0b1744c800d2) quincy (stable)": 3
},
"mgr": {
"ceph version 17.2.3-49.el9cp (70ade2732bfe4eb64129e97862bf0b1744c800d2) quincy (stable)": 2
},
"osd": {
"ceph version 17.2.3-49.el9cp (70ade2732bfe4eb64129e97862bf0b1744c800d2) quincy (stable)": 12
},
"mds": {
"ceph version 17.2.3-49.el9cp (70ade2732bfe4eb64129e97862bf0b1744c800d2) quincy (stable)": 3
},
"overall": {
"ceph version 17.2.3-49.el9cp (70ade2732bfe4eb64129e97862bf0b1744c800d2) quincy (stable)": 20
}
}
[root@ceph-amk-bz-o4n2a9-node7 ~]#
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs subvolume snapshot create cephfs subvol_1 snap_1 --group_name subvolgroup_1
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs subvolume snapshot clone cephfs subvol_1 snap_1 "123**\\//@clone_1112312" --group_name subvolgroup_1
Error ENOTSUP: operation 'clone_internal' is not allowed on subvolume '123**\//@clone_1112312' of type subvolume
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs subvolume snapshot clone cephfs subvol_1 snap_1 "clone_1" --group_name subvolgroup_1
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs subvolume snapshot info cephfs subvol_1 snap_1 --group_name subvolgroup_1
{
"created_at": "2022-10-12 19:17:18.005749",
"data_pool": "cephfs.cephfs.data",
"has_pending_clones": "yes",
"pending_clones": [
{
"name": "clone_1"
}
]
}
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs clone cancel cephfs clone_1
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs subvolume snapshot info cephfs subvol_1 snap_1 --group_name subvolgroup_1
{
"created_at": "2022-10-12 19:17:18.005749",
"data_pool": "cephfs.cephfs.data",
"has_pending_clones": "no"
}
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs subvolume snapshot clone cephfs subvol_1 snap_1 "123**\\//@clone_1112312" --group_name subvolgroup_1
Error EEXIST: subvolume '123**\//@clone_1112312' exists
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs subvolume ls cephfs
[
{
"name": "clone_1"
},
{
"name": "123**\\"
}
]
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs clone cancel cephfs clone_1
Error EINVAL: cannot cancel -- clone finished (check clone status)
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs clone status cephfs clone_1
{
"status": {
"state": "canceled",
"source": {
"volume": "cephfs",
"subvolume": "subvol_1",
"snapshot": "snap_1",
"group": "subvolgroup_1"
},
"failure": {
"errno": "4",
"error_msg": "user interrupted clone operation"
}
}
}
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs subvolume snapshot clone cephfs subvol_1 snap_1 "123**\\@clone_1112312" --group_name subvolgroup_1
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs subvolume snapshot info cephfs subvol_1 snap_1 --group_name subvolgroup_1
{
"created_at": "2022-10-12 19:17:18.005749",
"data_pool": "cephfs.cephfs.data",
"has_pending_clones": "yes",
"pending_clones": [
{
"name": "123**\\@clone_1112312"
}
]
}
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs clone status cephfs 123**\\@clone_1112312
{
"status": {
"state": "in-progress",
"source": {
"volume": "cephfs",
"subvolume": "subvol_1",
"snapshot": "snap_1",
"group": "subvolgroup_1"
}
}
}
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs subvolume snapshot clone cephfs subvol_1 snap_1 "123**\\ @clone_1112312" --group_name subvolgroup_1
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs clone status cephfs "123**\\ @clone_1112312"
{
"status": {
"state": "in-progress",
"source": {
"volume": "cephfs",
"subvolume": "subvol_1",
"snapshot": "snap_1",
"group": "subvolgroup_1"
}
}
}
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs clone cancel cephfs "123**\\ @clone_1112312"
[root@ceph-amk-bz-o4n2a9-node7 ~]#
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs clone cancel cephfs "123**\\@clone_1112312"
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs clone status cephfs 123**\\@clone_1112312
{
"status": {
"state": "canceled",
"source": {
"volume": "cephfs",
"subvolume": "subvol_1",
"snapshot": "snap_1",
"group": "subvolgroup_1"
},
"failure": {
"errno": "4",
"error_msg": "user interrupted clone operation"
}
}
}
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs subvolume ls cephfs
[
{
"name": "clone_1"
},
{
"name": "123**\\@clone_1112312"
},
{
"name": "123**\\"
},
{
"name": "123**\\ @clone_1112312"
}
]
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs subvolume snapshot clone cephfs subvol_1 snap_1 "123**\\n \t @clone_1112312" --group_name subvolgroup_1
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs clone cancel cephfs "123**\\n \t @clone_1112312"
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs clone status cephfs "123**\\n \t @clone_1112312"
{
"status": {
"state": "canceled",
"source": {
"volume": "cephfs",
"subvolume": "subvol_1",
"snapshot": "snap_1",
"group": "subvolgroup_1"
},
"failure": {
"errno": "4",
"error_msg": "user interrupted clone operation"
}
}
}
[root@ceph-amk-bz-o4n2a9-node7 ~]# ceph fs subvolume ls cephfs
[
{
"name": "clone_1"
},
{
"name": "123**\\@clone_1112312"
},
{
"name": "123**\\n \\t @clone_1112312"
},
{
"name": "123**\\"
},
{
"name": "123**\\ @clone_1112312"
}
]
[root@ceph-amk-bz-o4n2a9-node7 ~]#
Regards,
Amarnath
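The manual steps above can also be driven in a loop. The following is a minimal sketch (not part of the original verification) that reuses the cephfs/subvol_1/snap_1/subvolgroup_1 names from the transcript to clone, cancel, and report the clone state for several special-character names. Run it only against a test cluster; as seen above, the cancel step returns EINVAL if the clone has already finished.

```python
#!/usr/bin/env python3
"""Sketch of the clone-cancel verification loop shown in the transcript."""
import json
import subprocess

VOL, SUBVOL, SNAP, GROUP = "cephfs", "subvol_1", "snap_1", "subvolgroup_1"
CLONE_NAMES = [
    "clone_1",
    r"123**\@clone_1112312",
    r"123**\ @clone_1112312",
    r"123**\n \t @clone_1112312",
]


def ceph(*args):
    """Run a ceph CLI command and return its stdout."""
    return subprocess.run(
        ["ceph", *args], check=True, capture_output=True, text=True
    ).stdout


for name in CLONE_NAMES:
    # Start a clone of snap_1 under the given (special-character) name.
    ceph("fs", "subvolume", "snapshot", "clone", VOL, SUBVOL, SNAP, name,
         "--group_name", GROUP)
    # Cancel it; this raises CalledProcessError (EINVAL) if the clone
    # already finished, matching the behaviour seen in the transcript.
    ceph("fs", "clone", "cancel", VOL, name)
    # After a successful cancel the status should report state "canceled".
    status = json.loads(ceph("fs", "clone", "status", VOL, name))
    print(name, "->", status["status"]["state"])
```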
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360