Bug 2268560 - [CephFS - Snap Scheduler UI] - Retention policy for monthly interval added in CLI is seen as undefined in Dashboard
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Dashboard
Version: 7.1
Hardware: x86_64
OS: All
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 7.1
Assignee: Ivo Almeida
QA Contact: sumr
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks: 2267614 2298578 2298579
 
Reported: 2024-03-08 09:55 UTC by sumr
Modified: 2024-07-18 07:59 UTC
CC: 10 users

Fixed In Version: ceph-18.2.1-78.el9cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-06-13 14:29:04 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-8485 0 None None None 2024-03-11 12:16:22 UTC
Red Hat Issue Tracker RHCSDASH-1302 0 None None None 2024-03-11 12:16:25 UTC
Red Hat Product Errata RHSA-2024:3925 0 None None None 2024-06-13 14:29:10 UTC

Description sumr 2024-03-08 09:55:11 UTC
Created attachment 2020552 [details]
snap_sched_ui_func_cli_to_ui_M_undefined

Description of problem:
A retention policy set for the monthly (M) interval via the CLI is displayed as '3 undefined' in the Dashboard.

Version-Release number of selected component (if applicable): ceph version 18.2.1-50.el9cp 


How reproducible:


Steps to Reproduce:
Prerequisite: enable minutely snapshot schedules in the CLI.
1. In the CLI, create 2 subvolumes in the default group and 2 in a non-default group.
2. In the CLI, create a snapshot schedule on each subvolume as shown below:

[root@ceph-sumar-cg-test-zri789-node7 ~]# ceph fs snap-schedule add / 1m --subvol sv_cli_1 --fs cephfs
Schedule set for path /volumes/_nogroup/sv_cli_1/dc380127-54a5-4e23-a6d4-0b2c32a78a06/..

[root@ceph-sumar-cg-test-zri789-node7 ~]# ceph fs snap-schedule add / 2m --subvol sv_cli_non_def_1 --fs cephfs --group svg1
Schedule set for path /volumes/svg1/sv_cli_non_def_1/b54df0bb-0709-4533-a645-360c016ed959/..

[root@ceph-sumar-cg-test-zri789-node7 ~]# ceph fs snap-schedule add /volumes/svg1/sv_cli_non_def_2/d80f0592-a0e6-4bf2-b998-13de48111506/.. 1h  --fs cephfs
Schedule set for path /volumes/svg1/sv_cli_non_def_2/d80f0592-a0e6-4bf2-b998-13de48111506/..

[root@ceph-sumar-cg-test-zri789-node7 ~]# ceph fs snap-schedule add /volumes/_nogroup/sv_cli_2/25ff6da4-d47f-4b5f-ac9b-ba65b64fd21f/.. 2h  --fs cephfs
Schedule set for path /volumes/_nogroup/sv_cli_2/25ff6da4-d47f-4b5f-ac9b-ba65b64fd21f/..
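
The schedule argument in the commands above is a count followed by a single-letter unit (e.g. `1m`, `2h`). As a minimal illustration (not Ceph code; the unit letters are taken from the specs used above and the unit set is an assumption), such a spec can be split like this:

```python
import re

def parse_schedule(spec: str):
    """Split a snap-schedule spec such as '1m' or '2h' into (count, unit).

    The accepted unit letters here (m, h, d, w, M, y) are an assumption for
    illustration; the letters are case-sensitive, which matters for this bug.
    """
    match = re.fullmatch(r"(\d+)([mhdwMy])", spec)
    if not match:
        raise ValueError(f"bad schedule spec: {spec!r}")
    return int(match.group(1)), match.group(2)

print(parse_schedule("2h"))  # → (2, 'h')
```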

3. Add a retention policy to each subvolume as shown below:

[root@ceph-sumar-cg-test-zri789-node7 ~]# ceph fs snap-schedule retention add /volumes/_nogroup/sv_cli_1/dc380127-54a5-4e23-a6d4-0b2c32a78a06/.. 5n --fs cephfs
Retention added to path /volumes/_nogroup/sv_cli_1/dc380127-54a5-4e23-a6d4-0b2c32a78a06/..

[root@ceph-sumar-cg-test-zri789-node7 ~]# ceph fs snap-schedule retention add /volumes/svg1/sv_cli_non_def_1/b54df0bb-0709-4533-a645-360c016ed959/.. 5m --fs cephfs
Retention added to path /volumes/svg1/sv_cli_non_def_1/b54df0bb-0709-4533-a645-360c016ed959/..

[root@ceph-sumar-cg-test-zri789-node7 ~]# ceph fs snap-schedule retention add /volumes/svg1/sv_cli_non_def_2/d80f0592-a0e6-4bf2-b998-13de48111506/.. 6h --fs cephfs
Retention added to path /volumes/svg1/sv_cli_non_def_2/d80f0592-a0e6-4bf2-b998-13de48111506/..

[root@ceph-sumar-cg-test-zri789-node7 ~]# ceph fs snap-schedule retention add /volumes/_nogroup/sv_cli_2/25ff6da4-d47f-4b5f-ac9b-ba65b64fd21f/.. 3M --fs cephfs
Retention added to path /volumes/_nogroup/sv_cli_2/25ff6da4-d47f-4b5f-ac9b-ba65b64fd21f/..

[root@ceph-sumar-cg-test-zri789-node7 ~]# ceph fs snap-schedule retention add /volumes/_nogroup/sv_cli_2/25ff6da4-d47f-4b5f-ac9b-ba65b64fd21f/.. 10h --fs cephfs
Retention added to path /volumes/_nogroup/sv_cli_2/25ff6da4-d47f-4b5f-ac9b-ba65b64fd21f/..
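
The retention units are case-sensitive: lowercase `m` is minutely while uppercase `M` is monthly (and `n` keeps the last n snapshots). The reported symptom is consistent with a unit-to-label lookup in the Dashboard that has no entry for `M`. The following Python sketch is a hypothetical stand-in for that lookup (the real Dashboard code is TypeScript; the map contents are assumptions), where a missing `M` entry reproduces the '3 undefined' display:

```python
# Hypothetical unit -> display-label map, analogous to what a UI might use.
RETENTION_LABELS = {
    "n": "Snapshots",
    "m": "Minutely",
    "h": "Hourly",
    "d": "Daily",
    "w": "Weekly",
    "y": "Yearly",
    # "M": "Monthly" is absent here, mirroring the reported bug.
}

def render_retention(unit: str, count: int) -> str:
    # dict.get returns None for a missing key, playing the role of
    # JavaScript's `undefined` in the Dashboard.
    label = RETENTION_LABELS.get(unit)
    return f"{count} {label if label is not None else 'undefined'}"

print(render_retention("M", 3))   # → "3 undefined" (the bug symptom)
print(render_retention("h", 10))  # → "10 Hourly"
```

Adding the missing `"M": "Monthly"` entry to the map makes the same call render "3 Monthly", which is the expected result below.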

4. Verify the status of all snap-schedules in the CLI and compare with the UI:

[root@ceph-sumar-cg-test-zri789-node7 ~]# ceph fs snap-schedule status /volumes/_nogroup/sv_cli_2/25ff6da4-d47f-4b5f-ac9b-ba65b64fd21f/.. --fs cephfs
{"fs": "cephfs", "subvol": null, "group": null, "path": "/volumes/_nogroup/sv_cli_2/25ff6da4-d47f-4b5f-ac9b-ba65b64fd21f/..", "rel_path": "/volumes/_nogroup/sv_cli_2/25ff6da4-d47f-4b5f-ac9b-ba65b64fd21f/..", "schedule": "2h", "retention": {"M": 3, "h": 10}, "start": "2024-03-08T00:00:00", "created": "2024-03-08T08:45:46", "first": null, "last": null, "last_pruned": null, "created_count": 0, "pruned_count": 0, "active": true}

[root@ceph-sumar-cg-test-zri789-node7 ~]# ceph fs snap-schedule status /volumes/svg1/sv_cli_non_def_2/d80f0592-a0e6-4bf2-b998-13de48111506/.. --fs cephfs
{"fs": "cephfs", "subvol": null, "group": null, "path": "/volumes/svg1/sv_cli_non_def_2/d80f0592-a0e6-4bf2-b998-13de48111506/..", "rel_path": "/volumes/svg1/sv_cli_non_def_2/d80f0592-a0e6-4bf2-b998-13de48111506/..", "schedule": "1h", "retention": {"h": 6}, "start": "2024-03-08T00:00:00", "created": "2024-03-08T08:44:19", "first": "2024-03-08T09:00:01", "last": "2024-03-08T09:00:01", "last_pruned": null, "created_count": 1, "pruned_count": 0, "active": true}

[root@ceph-sumar-cg-test-zri789-node7 ~]# ceph fs snap-schedule status /volumes/svg1/sv_cli_non_def_1/b54df0bb-0709-4533-a645-360c016ed959/.. --fs cephfs
{"fs": "cephfs", "subvol": "sv_cli_non_def_1", "group": "svg1", "path": "/volumes/svg1/sv_cli_non_def_1/b54df0bb-0709-4533-a645-360c016ed959/..", "rel_path": "/volumes/svg1/sv_cli_non_def_1/b54df0bb-0709-4533-a645-360c016ed959/..", "schedule": "2m", "retention": {"m": 5}, "start": "2024-03-08T00:00:00", "created": "2024-03-08T08:40:44", "first": "2024-03-08T08:42:00", "last": "2024-03-08T09:10:01", "last_pruned": "2024-03-08T09:10:01", "created_count": 15, "pruned_count": 10, "active": true}

[root@ceph-sumar-cg-test-zri789-node7 ~]# ceph fs snap-schedule status /volumes/_nogroup/sv_cli_1/dc380127-54a5-4e23-a6d4-0b2c32a78a06/.. --fs cephfs
{"fs": "cephfs", "subvol": "sv_cli_1", "group": null, "path": "/volumes/_nogroup/sv_cli_1/dc380127-54a5-4e23-a6d4-0b2c32a78a06/..", "rel_path": "/volumes/_nogroup/sv_cli_1/dc380127-54a5-4e23-a6d4-0b2c32a78a06/..", "schedule": "1m", "retention": {"n": 5}, "start": "2024-03-08T00:00:00", "created": "2024-03-08T08:39:25", "first": "2024-03-08T08:40:00", "last": "2024-03-08T09:15:00", "last_pruned": "2024-03-08T09:15:00", "created_count": 36, "pruned_count": 31, "active": true}
[root@ceph-sumar-cg-test-zri789-node7 ~]# 
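
The Dashboard consumes JSON like the status output above, in which the `retention` object is keyed by the raw single-letter units, so `M` arrives distinct from `m` and must be mapped explicitly. A quick check against a trimmed copy of the sv_cli_2 status (only the fields relevant here are kept; the trimming is mine):

```python
import json

# Trimmed from the `snap-schedule status` output for sv_cli_2 above.
raw = '{"schedule": "2h", "retention": {"M": 3, "h": 10}}'
status = json.loads(raw)

# The keys are case-sensitive unit letters: monthly is "M", not "m".
assert "M" in status["retention"] and "m" not in status["retention"]

for unit, count in sorted(status["retention"].items()):
    print(f"{unit}: {count}")  # → "M: 3" then "h: 10"
```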

Actual results: The retention policy value for the monthly (M) interval is shown as '3 undefined' in the Dashboard.


Expected results: The retention policy value for the monthly (M) interval should be displayed in the Dashboard as '3 Monthly'.


Additional info: Screenshots are attached.

Comment 14 errata-xmlrpc 2024-06-13 14:29:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925

