Bug 2141061
| Summary: | snap-schedule add command fails when the subvolume argument is provided | | | |
|---|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Amarnath <amk> | |
| Component: | CephFS | Assignee: | Milind Changire <mchangir> | |
| Status: | CLOSED WONTFIX | QA Contact: | Hemanth Kumar <hyelloji> | |
| Severity: | high | Docs Contact: | ||
| Priority: | unspecified | |||
| Version: | 5.2 | CC: | akraj, ceph-eng-bugs, cephqe-warriors, hyelloji, mchangir, tserlin, vereddy, vshankar | |
| Target Milestone: | --- | |||
| Target Release: | 5.3z2 | |||
| Hardware: | Unspecified | |||
| OS: | Linux | |||
| Whiteboard: | ||||
| Fixed In Version: | ceph-16.2.10-109.el8cp | Doc Type: | Known Issue | |
| Doc Text: | | Story Points: | --- | |
| Clone Of: | ||||
| : | 2153196 | Environment: | | |
| Last Closed: | 2023-03-23 05:04:27 UTC | Type: | Bug | |
| Regression: | --- | Mount Type: | --- | |
| Documentation: | --- | CRM: | | |
| Verified Versions: | | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | | |
| Cloudforms Team: | --- | Target Upstream Version: | | |
| Embargoed: | | | | |
Moving to 5.3z1. If it is a blocker, move it back to 5.3.

Hi Milind,
Apologies for the delayed response.
Please find the mgr log snippet below:
2022-12-06T03:43:17.820+0000 7ff045300700 0 [progress INFO root] Processing OSDMap change 392..392
2022-12-06T03:43:17.939+0000 7ff032e09700 0 [volumes INFO mgr_util] scanning for idle connections..
2022-12-06T03:43:17.939+0000 7ff032e09700 0 [volumes INFO mgr_util] cleaning up connections: []
2022-12-06T03:43:18.102+0000 7ff02edc1700 0 [volumes INFO mgr_util] scanning for idle connections..
2022-12-06T03:43:18.102+0000 7ff02edc1700 0 [volumes INFO mgr_util] cleaning up connections: []
2022-12-06T03:43:18.199+0000 7ff02b53a700 0 [volumes INFO mgr_util] scanning for idle connections..
2022-12-06T03:43:18.199+0000 7ff02b53a700 0 [volumes INFO mgr_util] cleaning up connections: []
2022-12-06T03:43:19.647+0000 7ff052d5b700 0 log_channel(cluster) log [DBG] : pgmap v94: 305 pgs: 96 active+undersized, 209 active+clean; 82 MiB data, 1.7 GiB used, 178 GiB / 180 GiB avail
2022-12-06T03:43:21.648+0000 7ff052d5b700 0 log_channel(cluster) log [DBG] : pgmap v95: 305 pgs: 96 active+undersized, 209 active+clean; 82 MiB data, 1.7 GiB used, 178 GiB / 180 GiB avail
2022-12-06T03:43:22.823+0000 7ff045300700 0 [progress INFO root] Processing OSDMap change 392..392
2022-12-06T03:43:23.649+0000 7ff052d5b700 0 log_channel(cluster) log [DBG] : pgmap v96: 305 pgs: 96 active+undersized, 209 active+clean; 82 MiB data, 1.7 GiB used, 178 GiB / 180 GiB avail
2022-12-06T03:43:25.011+0000 7ff053d5d700 0 log_channel(audit) log [DBG] : from='client.18432 -' entity='client.admin' cmd=[{"prefix": "fs snap-schedule add", "path": "/volumes/_nogroup/subvol_2/fa3a56c5-cbd4-455f-87e0-d189b336e09a", "snap_schedule": "4h", "start": "2022-10-09T14:00:00", "fs": "cephfs", "subvol": "subvol_2", "target": ["mon-mgr", ""]}]: dispatch
2022-12-06T03:43:25.012+0000 7ff06921c700 -1 no module 'fs'
2022-12-06T03:43:25.012+0000 7ff06921c700 -1 mgr handle_command module 'snap_schedule' command handler threw exception: Module not found
2022-12-06T03:43:25.012+0000 7ff06921c700 -1 mgr.server reply reply (22) Invalid argument Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 1448, in _handle_command
    return CLICommand.COMMANDS[cmd['prefix']].call(self, cmd, inbuf)
  File "/usr/share/ceph/mgr/mgr_module.py", line 414, in call
    return self.func(mgr, **kwargs)
  File "/usr/share/ceph/mgr/snap_schedule/module.py", line 136, in snap_schedule_add
    abs_path = self.resolve_subvolume_path(use_fs, subvol, path)
  File "/usr/share/ceph/mgr/snap_schedule/module.py", line 44, in resolve_subvolume_path
    fs, subvol)
  File "/usr/share/ceph/mgr/mgr_module.py", line 1860, in remote
    args, kwargs)
ImportError: Module not found
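For context on what this traceback shows: `MgrModule.remote()` forwards a call to another mgr module looked up by name, and `resolve_subvolume_path` asks for a module named 'fs'. No mgr module by that name exists (subvolume commands are implemented by the 'volumes' module), so the lookup fails with the "no module 'fs'" / `ImportError: Module not found` lines above, which the command handler then surfaces as EINVAL. Below is a minimal, self-contained sketch of that dispatch pattern; the registry layout and the `subvolume_getpath` method name are illustrative assumptions, not Ceph's actual internals.

```python
from typing import Any, Callable, Dict

# Hypothetical registry of loaded mgr modules and their remotely callable
# methods -- a stand-in for the real mgr module map, for illustration only.
MODULES: Dict[str, Dict[str, Callable[..., Any]]] = {
    "volumes": {
        # Illustrative handler; returns an (rc, stdout, stderr) tuple.
        "subvolume_getpath": lambda fs_name, subvol: (
            0, f"/volumes/_nogroup/{subvol}/<uuid>", ""),
    },
}

def remote(module: str, method: str, *args: Any) -> Any:
    """Forward a call to another mgr module by name, as MgrModule.remote() does."""
    if module not in MODULES:
        # Mirrors the log lines above: "no module 'fs'" followed by
        # "ImportError: Module not found".
        raise ImportError("Module not found")
    return MODULES[module][method](*args)

# snap_schedule's resolve_subvolume_path targets the wrong module name:
try:
    remote("fs", "subvolume_getpath", "cephfs", "subvol_2")
except ImportError as exc:
    print(f"no module 'fs' -> {exc}")  # what the mgr logs, surfaced as EINVAL

# The 'volumes' module is the one that actually implements subvolume lookups:
rc, path, err = remote("volumes", "subvolume_getpath", "cephfs", "subvol_2")
print(path)
```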
I have attached the active mgr log.
Regards,
Amarnath
Description of problem:
The snap-schedule add command fails when the subvolume argument is provided.

Version-Release number of selected component (if applicable):

How reproducible:
1/1

Steps to Reproduce:
1. Create a subvolume and add a snap-schedule for it.

When we tried the command, it failed with a traceback:

[root@ceph-amk-doc-bz-9o0073-node7 ~]# ceph fs snap-schedule add /volumes/_nogroup/subvol_2/fa3a56c5-cbd4-455f-87e0-d189b336e09a 4h 2022-10-09T14:00:00 cephfs subvol_2
Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 1448, in _handle_command
    return CLICommand.COMMANDS[cmd['prefix']].call(self, cmd, inbuf)
  File "/usr/share/ceph/mgr/mgr_module.py", line 414, in call
    return self.func(mgr, **kwargs)
  File "/usr/share/ceph/mgr/snap_schedule/module.py", line 136, in snap_schedule_add
    abs_path = self.resolve_subvolume_path(use_fs, subvol, path)
  File "/usr/share/ceph/mgr/snap_schedule/module.py", line 44, in resolve_subvolume_path
    fs, subvol)
  File "/usr/share/ceph/mgr/mgr_module.py", line 1860, in remote
    args, kwargs)
ImportError: Module not found

[root@ceph-amk-doc-bz-9o0073-node7 ~]# ceph fs subvolume info cephfs subvol_2
{
    "atime": "2022-11-03 17:04:18",
    "bytes_pcent": "undefined",
    "bytes_quota": "infinite",
    "bytes_used": 0,
    "created_at": "2022-11-03 17:04:18",
    "ctime": "2022-11-03 17:04:18",
    "data_pool": "cephfs.cephfs.data",
    "features": [
        "snapshot-clone",
        "snapshot-autoprotect",
        "snapshot-retention"
    ],
    "gid": 0,
    "mode": 16877,
    "mon_addrs": [
        "10.0.208.229:6789",
        "10.0.209.83:6789",
        "10.0.208.20:6789"
    ],
    "mtime": "2022-11-03 17:04:18",
    "path": "/volumes/_nogroup/subvol_2/fa3a56c5-cbd4-455f-87e0-d189b336e09a",
    "pool_namespace": "",
    "state": "complete",
    "type": "subvolume",
    "uid": 0
}

[root@ceph-amk-doc-bz-9o0073-node7 ~]# ceph versions
{
    "mon": {
        "ceph version 16.2.10-69.el8cp (cc75d04053c4340264a41219391e4808a89a8a4d) pacific (stable)": 3
    },
    "mgr": {
        "ceph version 16.2.10-69.el8cp (cc75d04053c4340264a41219391e4808a89a8a4d) pacific (stable)": 2
    },
    "osd": {
        "ceph version 16.2.10-69.el8cp (cc75d04053c4340264a41219391e4808a89a8a4d) pacific (stable)": 12
    },
    "mds": {
        "ceph version 16.2.10-69.el8cp (cc75d04053c4340264a41219391e4808a89a8a4d) pacific (stable)": 2
    },
    "overall": {
        "ceph version 16.2.10-69.el8cp (cc75d04053c4340264a41219391e4808a89a8a4d) pacific (stable)": 19
    }
}

Actual results:

Expected results:

Additional info:
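A possible workaround (an editorial suggestion, not confirmed in this report): since the failure happens while resolving the subvolume name to a path, resolve it yourself with `ceph fs subvolume getpath cephfs subvol_2` and pass the returned absolute path (the same `path` shown in the subvolume info output above) to `ceph fs snap-schedule add` without the trailing subvol argument. With no subvol given, the failing `resolve_subvolume_path` remote call should not be reached.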