Bug 2117166 - [rbd] : trash purge schedule subcommands are not working
Summary: [rbd] : trash purge schedule subcommands are not working
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RBD
Version: 6.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 6.0
Assignee: Ilya Dryomov
QA Contact: Preethi
Docs Contact: Masauso Lungu
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2022-08-10 08:10 UTC by Vasishta
Modified: 2023-03-20 18:58 UTC
CC List: 5 users

Fixed In Version: ceph-17.2.3-5.el9cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-03-20 18:57:23 UTC
Embargoed:




Links
System                    ID              Last Updated
Ceph Project Bug Tracker  57107           2022-08-11 19:44:57 UTC
Red Hat Issue Tracker     RHCEPH-5038     2022-08-10 08:20:43 UTC
Red Hat Product Errata    RHBA-2023:1360  2023-03-20 18:58:04 UTC

Description Vasishta 2022-08-10 08:10:44 UTC
Description of problem:
Subcommands of `rbd trash purge schedule` are not working: any subcommand invocation fails with `rbd: too many arguments`.

Version-Release number of selected component (if applicable):
ceph version 17.2.3-1.el9cp (774dab48f3974a6b8f48fd848f191864650ea763) quincy (stable)

How reproducible:
Always; reproduced three times.

Steps to Reproduce:
1. Configure the cluster and try rbd trash purge schedule commands.

Actual results:
[ceph: root@ceph-vasishta-c9hkq7-node1-installer /]# rbd trash purge schedule remove
rbd: too many arguments
[ceph: root@ceph-vasishta-c9hkq7-node1-installer /]# rbd trash purge schedule add   
rbd: too many arguments
[ceph: root@ceph-vasishta-c9hkq7-node1-installer /]# rbd trash purge schedule list
rbd: too many arguments
[ceph: root@ceph-vasishta-c9hkq7-node1-installer /]# rbd trash purge schedule status
rbd: too many arguments


Expected results:
The `rbd trash purge schedule` subcommands should work as before.

Additional info:
From a pacific (5.2 cluster)

[ceph: root@magna031 /]#  rbd trash purge schedule list
[ceph: root@magna031 /]#  rbd trash purge schedule add 1
[ceph: root@magna031 /]#  rbd trash purge schedule list
every 1m
[ceph: root@magna031 /]# rbd trash purge schedule status
POOL         NAMESPACE  SCHEDULE TIME      
check                   2022-08-10 08:08:00
test_mirror             2022-08-10 08:08:00
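For context, the 08:08:00 SCHEDULE TIME shown above is simply the next whole-interval boundary after the 1m schedule was added. A minimal sketch of how an interval string maps to that time; this is NOT the Ceph mgr rbd_support module's actual code, and the midnight-aligned rounding is an assumption for illustration (the bare-number-means-minutes behavior matches `schedule add 1` listing as "every 1m" above):

```python
# Toy sketch, NOT the Ceph mgr rbd_support module: illustrates how an
# interval string such as "1m" or "10m" could map to the next schedule
# time shown by `rbd trash purge schedule status`. A bare number is
# treated as minutes, matching `schedule add 1` listing as "every 1m".
from datetime import datetime, timedelta

_UNITS = {"m": 60, "h": 3600, "d": 86400}  # assumed suffix convention

def interval_seconds(spec: str) -> int:
    """Convert '10m' -> 600; a bare number means minutes."""
    if spec and spec[-1] in _UNITS:
        return int(spec[:-1]) * _UNITS[spec[-1]]
    return int(spec) * 60

def next_schedule_time(now: datetime, spec: str) -> datetime:
    """Round `now` up to the next interval boundary since midnight
    (the midnight alignment is an assumption for illustration)."""
    step = interval_seconds(spec)
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    elapsed = int((now - midnight).total_seconds())
    return midnight + timedelta(seconds=((elapsed // step) + 1) * step)
```

With a 1m schedule added shortly before 08:08, this yields 2022-08-10 08:08:00, matching the status output above.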


FYI, all help messages return the same output:
# rbd help trash purge schedule status
usage: rbd trash purge [--pool <pool>] [--namespace <namespace>] 
                       [--no-progress] [--expired-before <expired-before>] 
                       [--threshold <threshold>] 
                       <pool-spec> 

Remove all expired images from trash.

Positional arguments
  <pool-spec>           pool specification
                        (example: <pool-name>[/<namespace>])

Optional arguments
  -p [ --pool ] arg     pool name
  --namespace arg       namespace name
  --no-progress         disable progress output
  --expired-before date purges images that expired before the given date
  --threshold arg       purges images until the current pool data usage is
                        reduced to X%, value range: 0.0-1.0
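The `--threshold` semantics documented above (purge images until pool data usage drops to the given ratio) can be sketched as follows. This is a hedged illustration, not Ceph source: the tuple layout and the oldest-deferment-first purge order are assumptions.

```python
# Hedged sketch of the documented --threshold semantics, not Ceph source:
# purge expired images (oldest deferment end first -- an assumed order)
# until pool data usage drops to the given ratio in [0.0, 1.0].

def purge_to_threshold(trash, pool_capacity, used, threshold):
    """Return names of images purged so that used/pool_capacity <= threshold.

    `trash` is a list of (name, size_bytes, deferment_end) tuples; the
    field layout here is invented for illustration.
    """
    purged = []
    for name, size, _deferment_end in sorted(trash, key=lambda t: t[2]):
        if used / pool_capacity <= threshold:
            break  # usage already at or below the target ratio
        used -= size
        purged.append(name)
    return purged
```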

Comment 6 Preethi 2022-08-11 11:23:53 UTC
@Ilya, I see the issue on my newly deployed 6.0 cluster:

[ceph: root@ceph-pnataraj-lnkd9w-node1-installer /]# ceph versions
{
    "mon": {
        "ceph version 17.2.3-1.el9cp (774dab48f3974a6b8f48fd848f191864650ea763) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.3-1.el9cp (774dab48f3974a6b8f48fd848f191864650ea763) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.3-1.el9cp (774dab48f3974a6b8f48fd848f191864650ea763) quincy (stable)": 11
    },
    "mds": {},
    "overall": {
        "ceph version 17.2.3-1.el9cp (774dab48f3974a6b8f48fd848f191864650ea763) quincy (stable)": 16
    }
}
[ceph: root@ceph-pnataraj-lnkd9w-node1-installer /]# rbd trash purge schedule remove
rbd: too many arguments
[ceph: root@ceph-pnataraj-lnkd9w-node1-installer /]# rbd trash purge schedule add
rbd: too many arguments
[ceph: root@ceph-pnataraj-lnkd9w-node1-installer /]# rbd trash purge schedule list
rbd: too many arguments
[ceph: root@ceph-pnataraj-lnkd9w-node1-installer /]# rbd trash purge schedule status
rbd: too many arguments
[ceph: root@ceph-pnataraj-lnkd9w-node1-installer /]#

Comment 12 Preethi 2022-08-18 14:17:46 UTC
This is working fine with the latest build of RHCS 6.0. Snippet and build version below.

[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]# rbd trash purge schedule add --pool test 10m
[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]# rbd trash purge schedule ls --pool test 
every 10m

[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]# rbd ls test
image1
[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]# rbd trash purge schedule status
POOL  NAMESPACE  SCHEDULE TIME      
test             2022-08-18 06:10:00
[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]# rbd trash purge schedule remove --pool test 10m

[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]#  rbd trash purge schedule ls --pool test
[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]# ceph versions
{
    "mon": {
        "ceph version 17.2.3-9.el9cp (ec1f163818fab0b1a8a98bfe1ec5c949373b0e6d) quincy (stable)": 5
    },
    "mgr": {
        "ceph version 17.2.3-9.el9cp (ec1f163818fab0b1a8a98bfe1ec5c949373b0e6d) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.3-9.el9cp (ec1f163818fab0b1a8a98bfe1ec5c949373b0e6d) quincy (stable)": 12
    },
    "mds": {},
    "overall": {
        "ceph version 17.2.3-9.el9cp (ec1f163818fab0b1a8a98bfe1ec5c949373b0e6d) quincy (stable)": 19
    }
}
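The add/ls/remove lifecycle verified in this comment can be modeled as a small registry keyed by pool, namespace, and interval, which is why `schedule remove` must repeat the same interval string that `schedule add` registered (`remove --pool test 10m` above). This toy model's class name and structure are invented for illustration, not the rbd CLI implementation:

```python
# Toy model of the add/ls/remove lifecycle, not the rbd CLI itself:
# schedules are keyed by (pool, namespace, interval), which is why
# `schedule remove` must repeat the interval given to `schedule add`.

class TrashPurgeSchedules:
    def __init__(self):
        self._schedules = set()

    def add(self, pool, interval, namespace=""):
        self._schedules.add((pool, namespace, interval))

    def remove(self, pool, interval, namespace=""):
        # discard() is a no-op if the exact key was never registered
        self._schedules.discard((pool, namespace, interval))

    def ls(self, pool, namespace=""):
        return sorted(i for p, n, i in self._schedules
                      if p == pool and n == namespace)
```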

Comment 20 errata-xmlrpc 2023-03-20 18:57:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360

