Bug 2188175

Summary: [CephFS] snap-schedule status is failing with NameError
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Amarnath <amk>
Component: CephFS
Assignee: Venky Shankar <vshankar>
Status: CLOSED DUPLICATE
QA Contact: Hemanth Kumar <hyelloji>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 6.1
CC: ceph-eng-bugs, cephqe-warriors, vereddy
Target Milestone: ---
Keywords: Regression
Target Release: 6.1
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-04-20 02:46:46 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Amarnath 2023-04-20 01:57:33 UTC
Description of problem:
[CephFS] snap-schedule status is failing with NameError

Steps Followed:
1. Created a snap-schedule using the commands below:
[root@ceph-amk-test-dcg363-node7 ~]# ceph config set mgr mgr/snap_schedule/allow_m_granularity true

[root@ceph-amk-test-dcg363-node7 ~]# ceph mgr module enable snap_schedule
[root@ceph-amk-test-dcg363-node7 ~]# ceph fs snap-schedule add / 1m
Error ENOENT: schedule multiplier "m" not recognized
[root@ceph-amk-test-dcg363-node7 ~]# ceph fs snap-schedule add / 1M
Schedule set for path /

[root@ceph-amk-test-dcg363-node7 ~]# ceph fs snap-schedule add / 1h
Schedule set for path /
[root@ceph-amk-test-dcg363-node7 ~]# ceph config set mgr mgr/snap_schedule/allow_m_granularity true
[root@ceph-amk-test-dcg363-node7 ~]# ceph fs snap-schedule add / 1m
Error ENOENT: schedule multiplier "m" not recognized
[root@ceph-amk-test-dcg363-node7 ~]# 
[root@ceph-amk-test-dcg363-node7 ~]# 
[root@ceph-amk-test-dcg363-node7 ~]# ceph fs snap-schedule retention add / h 14
Retention added to path /
[root@ceph-amk-test-dcg363-node7 ~]# ceph fs snap-schedule activate /
Schedule activated for path /
[root@ceph-amk-test-dcg363-node7 ~]# ceph fs snap-schedule status /
Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 1758, in _handle_command
    return CLICommand.COMMANDS[cmd['prefix']].call(self, cmd, inbuf)
  File "/usr/share/ceph/mgr/mgr_module.py", line 462, in call
    return self.func(mgr, **kwargs)
  File "/usr/share/ceph/mgr/snap_schedule/module.py", line 83, in snap_schedule_get
    abs_path = self.resolve_subvolume_path(use_fs, subvol, path)
NameError: name 'subvol' is not defined

When trying to fetch the status, we see a NameError.

On older builds, the same command used to return:
 date; ceph fs snap-schedule status
Fri Sep  9 13:00:06 EDT 2022
{"fs": "cephfs", "subvol": null, "path": "/", "rel_path": "/", "schedule": "1h", "retention": {"h": 14}, "start": "2022-09-08T00:00:00", "created": "2022-09-08T17:11:01", "first": null, "last": null, "last_pruned": null, "created_count": 0, "pruned_count": 0, "active": true}
Version-Release number of selected component (if applicable):
[root@ceph-amk-test-dcg363-node7 ~]# ceph versions
{
    "mon": {
        "ceph version 17.2.6-23.el9cp (c6d4a25b629af2f2149b9e376e0c039f0e663bf3) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.6-23.el9cp (c6d4a25b629af2f2149b9e376e0c039f0e663bf3) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.6-23.el9cp (c6d4a25b629af2f2149b9e376e0c039f0e663bf3) quincy (stable)": 16
    },
    "mds": {
        "ceph version 17.2.6-23.el9cp (c6d4a25b629af2f2149b9e376e0c039f0e663bf3) quincy (stable)": 3
    },
    "overall": {
        "ceph version 17.2.6-23.el9cp (c6d4a25b629af2f2149b9e376e0c039f0e663bf3) quincy (stable)": 24
    }
}


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
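The traceback above points at a name, `subvol`, that is never bound in the scope of `snap_schedule_get` before it is passed to `resolve_subvolume_path`, so the command fails before any schedule lookup happens. A minimal, hypothetical Python sketch of this failure pattern and a plausible fix is below; this is not the actual Ceph source, and `resolve_subvolume_path` here is a stand-in for the manager helper:

```python
# Hypothetical, simplified sketch of the failure pattern seen in
# snap_schedule/module.py (NOT the real Ceph code).

def resolve_subvolume_path(fs, subvol, path):
    # Stand-in for the mgr helper; just echoes its inputs here.
    return {"fs": fs, "subvol": subvol, "path": path}

def snap_schedule_get_broken(path, fs="cephfs"):
    # BUG: `subvol` is referenced but never assigned in this scope
    # (and no global of that name exists), so the first call raises
    # NameError -- matching the traceback in the description.
    return resolve_subvolume_path(fs, subvol, path)

def snap_schedule_get_fixed(path, fs="cephfs", subvol=None):
    # Fix sketch: accept `subvol` as an optional argument so the name
    # is always bound, consistent with the "subvol": null field in the
    # status output from older builds.
    return resolve_subvolume_path(fs, subvol, path)
```

With the name bound, `snap_schedule_get_fixed("/")` returns normally with `subvol` set to None, which mirrors the pre-regression JSON output shown above.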

Comment 1 RHEL Program Management 2023-04-20 01:57:42 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 3 Venky Shankar 2023-04-20 02:46:46 UTC

*** This bug has been marked as a duplicate of bug 2187659 ***