Bug 2120498

Summary: mgr/snap_schedule assumes that the client snap dir is always ".snap"
Product:           [Red Hat Storage] Red Hat Ceph Storage
Component:         CephFS
Version:           5.3
Target Release:    5.3z1
Status:            CLOSED ERRATA
Severity:          low
Priority:          unspecified
Reporter:          Milind Changire <mchangir>
Assignee:          Milind Changire <mchangir>
QA Contact:        Amarnath <amk>
CC:                ceph-eng-bugs, cephqe-warriors, hyelloji, tserlin, vshankar
Hardware:          Unspecified
OS:                Unspecified
Fixed In Version:  ceph-16.2.10-99.el8cp
Doc Type:          If docs needed, set a value
Type:              Bug
Last Closed:       2023-02-28 10:05:18 UTC

Description Milind Changire 2022-08-23 06:01:43 UTC
Description of problem:
The snap_schedule mgr module assumes that the client snapshot directory is always ".snap". The module's functionality breaks when the client snapshot directory is changed to something else (for example, via the client_snapdir configuration option).
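
A rough client-side illustration of why the hardcoded name is fragile (the mount point and directory name below are made up for the example, not taken from this report):

# Override the snapshot directory name exposed to clients
ceph config set client client_snapdir mysnaps

# Remount a ceph-fuse client so the new name takes effect
umount /mnt/cephfs
ceph-fuse /mnt/cephfs

# Snapshots are now listed under the configured name ...
ls /mnt/cephfs/mysnaps

# ... while the default name is no longer exposed, so anything that
# hardcodes ".snap" (as snap_schedule did) is expected to fail here
ls /mnt/cephfs/.snap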


Comment 1 RHEL Program Management 2022-08-23 06:01:53 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 9 Amarnath 2023-01-26 17:57:46 UTC
Verified on version 16.2.10-106.el8cp

Steps Followed:

1. Changed the client_snapdir configuration to "test" (the configuration commands would have looked like the sketch after this list).
2. Created a snapshot schedule.
3. Verified that scheduled snapshots are created under the configured directory, i.e. "test", on the client mount.
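
The client_snapdir change itself is not captured in the transcript below; setting it would have been along these lines (mount point taken from the listing further down, exact sequence assumed):

# Set the client snapshot directory name to "test"
ceph config set client client_snapdir test

# Confirm the value (matches the `ceph config get` call in the transcript)
ceph config get client client_snapdir

# Remount the ceph-fuse client so the new name is picked up
umount /mnt/cephfs_fuse_2
ceph-fuse /mnt/cephfs_fuse_2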

[root@ceph-amk-bz-sd0nig-node7 cephfs_fuse_2]# ceph fs snap-schedule add / 1h
Error ENOTSUP: Module 'snap_schedule' is not enabled (required by command 'fs snap-schedule add'): use `ceph mgr module enable snap_schedule` to enable it
[root@ceph-amk-bz-sd0nig-node7 cephfs_fuse_2]# ceph mgr module enable snap_schedule
[root@ceph-amk-bz-sd0nig-node7 cephfs_fuse_2]# ceph config get client client_snapdir
test
[root@ceph-amk-bz-sd0nig-node7 cephfs_fuse_2]# ceph fs snap-schedule retention add / h 14
Error ENOENT: No schedule found for /
[root@ceph-amk-bz-sd0nig-node7 cephfs_fuse_2]# ceph fs snap-schedule add / 1h
Schedule set for path /
[root@ceph-amk-bz-sd0nig-node7 cephfs_fuse_2]# ceph fs snap-schedule retention add / h 14
Retention added to path /
[root@ceph-amk-bz-sd0nig-node7 cephfs_fuse_2]# ceph fs snap-schedule activate /
Schedule activated for path /
[root@ceph-amk-bz-sd0nig-node7 cephfs_fuse_2]# ceph fs snap-schedule status /
{"fs": "cephfs", "subvol": null, "path": "/", "rel_path": "/", "schedule": "1h", "retention": {"h": 14}, "start": "2023-01-25T00:00:00", "created": "2023-01-25T21:33:47", "first": null, "last": null, "last_pruned": null, "created_count": 0, "pruned_count": 0, "active": true}
[root@ceph-amk-bz-sd0nig-node7 cephfs_fuse_2]# date
Wed Jan 25 16:34:18 EST 2023
[root@ceph-amk-bz-sd0nig-node7 cephfs_fuse_2]# 
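
For quick checks of the schedule counters later on, the JSON printed by the status command can be filtered, for example with jq (assuming jq is available on the node):

# Pull only the activity flag and snapshot counters from the schedule status
ceph fs snap-schedule status / | jq '{active, created_count, pruned_count, first, last}'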



[root@ceph-amk-bz-sd0nig-node7 ~]# ceph fs snap-schedule status /
{"fs": "cephfs", "subvol": null, "path": "/", "rel_path": "/", "schedule": "1h", "retention": {"h": 14}, "start": "2023-01-25T00:00:00", "created": "2023-01-25T21:33:47", "first": "2023-01-25T22:00:00", "last": "2023-01-26T05:00:00", "last_pruned": null, "created_count": 8, "pruned_count": 0, "active": true}
[root@ceph-amk-bz-sd0nig-node7 ~]# 
[root@ceph-amk-bz-sd0nig-node7 ~]# cd /mnt/
[root@ceph-amk-bz-sd0nig-node7 mnt]# ls -lrt
total 3
drwxr-xr-x. 3 root root         381 Jan 23 22:51 fuse_root
drwxr-xr-x. 3 root root 16515727907 Jan 23 22:51 cephfs_fuse_2
drwxr-xr-x. 2 root root          94 Jan 23 22:55 cephfs_fuseuse
drwxr-xr-x. 5 root root 16515727403 Jan 25 14:53 cephfs_fuse_1
drwxr-xr-x. 2 root root          14 Jan 25 16:27 cephfs_fuse_3
[root@ceph-amk-bz-sd0nig-node7 mnt]# cd 
[root@ceph-amk-bz-sd0nig-node7 ~]# cd cephfs_fuse_2
-bash: cd: cephfs_fuse_2: No such file or directory

[root@ceph-amk-bz-sd0nig-node7 ~]# cd /mnt/cephfs_fuse_2
[root@ceph-amk-bz-sd0nig-node7 cephfs_fuse_2]# cd test
[root@ceph-amk-bz-sd0nig-node7 test]# ls -lrt
total 4
drwxr-xr-x. 3 root root 16515727907 Jan 23 22:51 scheduled-2023-01-26-05_00_00_UTC
drwxr-xr-x. 3 root root 16515727907 Jan 23 22:51 scheduled-2023-01-26-04_00_00_UTC
drwxr-xr-x. 3 root root 16515727907 Jan 23 22:51 scheduled-2023-01-26-03_00_00_UTC
drwxr-xr-x. 3 root root 16515727907 Jan 23 22:51 scheduled-2023-01-26-02_00_00_UTC
drwxr-xr-x. 3 root root 16515727907 Jan 23 22:51 scheduled-2023-01-26-01_00_00_UTC
drwxr-xr-x. 3 root root 16515727907 Jan 23 22:51 scheduled-2023-01-26-00_00_00_UTC
drwxr-xr-x. 3 root root 16515727907 Jan 23 22:51 scheduled-2023-01-25-23_00_00_UTC
drwxr-xr-x. 3 root root 16515727907 Jan 23 22:51 scheduled-2023-01-25-22_00_00_UTC
[root@ceph-amk-bz-sd0nig-node7 test]# 
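
As a follow-up check, the number of scheduled snapshots under the configured directory can be compared against the retention limit of 14 hourly snapshots (directory names as used above):

# Count scheduled snapshots; with retention "h 14" this should not exceed 14 once pruning starts
ls /mnt/cephfs_fuse_2/test | grep -c '^scheduled-'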

---------------------------------------------------------------------------
Cluster and version details
[root@ceph-amk-bz-sd0nig-node7 ~]# ceph -s
  cluster:
    id:     60b3639c-9b52-11ed-a00f-fa163e6c4525
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-amk-bz-sd0nig-node1-installer,ceph-amk-bz-sd0nig-node2,ceph-amk-bz-sd0nig-node3 (age 2d)
    mgr: ceph-amk-bz-sd0nig-node1-installer.mzmdkg(active, since 20h), standbys: ceph-amk-bz-sd0nig-node2.qoktvk
    mds: 2/2 daemons up, 1 standby
    osd: 12 osds: 12 up (since 2d), 12 in (since 2d)
 
  data:
    volumes: 1/1 healthy
    pools:   3 pools, 161 pgs
    objects: 4.90k objects, 15 GiB
    usage:   47 GiB used, 133 GiB / 180 GiB avail
    pgs:     161 active+clean
 
[root@ceph-amk-bz-sd0nig-node7 ~]# ceph versions
{
    "mon": {
        "ceph version 16.2.10-106.el8cp (5c3bbcd85078524876d1fc91ec4e3d80d83a28f4) pacific (stable)": 3
    },
    "mgr": {
        "ceph version 16.2.10-106.el8cp (5c3bbcd85078524876d1fc91ec4e3d80d83a28f4) pacific (stable)": 2
    },
    "osd": {
        "ceph version 16.2.10-106.el8cp (5c3bbcd85078524876d1fc91ec4e3d80d83a28f4) pacific (stable)": 12
    },
    "mds": {
        "ceph version 16.2.10-106.el8cp (5c3bbcd85078524876d1fc91ec4e3d80d83a28f4) pacific (stable)": 3
    },
    "overall": {
        "ceph version 16.2.10-106.el8cp (5c3bbcd85078524876d1fc91ec4e3d80d83a28f4) pacific (stable)": 20
    }
}
[root@ceph-amk-bz-sd0nig-node7 ~]# 


Regards,
Amarnath

Comment 10 errata-xmlrpc 2023-02-28 10:05:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 5.3 Bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:0980