Bug 2117166

Summary: [rbd] : trash purge schedule subcommands are not working
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: RBD
Version: 6.0
Target Release: 6.0
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: unspecified
Status: CLOSED ERRATA
Reporter: Vasishta <vashastr>
Assignee: Ilya Dryomov <idryomov>
QA Contact: Preethi <pnataraj>
Docs Contact: Masauso Lungu <mlungu>
CC: ceph-eng-bugs, cephqe-warriors, idryomov, mlungu, vdas
Fixed In Version: ceph-17.2.3-5.el9cp
Doc Type: No Doc Update
Type: Bug
Last Closed: 2023-03-20 18:57:23 UTC

Description Vasishta 2022-08-10 08:10:44 UTC
Description of problem:
The subcommands of `rbd trash purge schedule` are not working.

Any subcommand invocation fails with `rbd: too many arguments`.

Version-Release number of selected component (if applicable):
ceph version 17.2.3-1.el9cp (774dab48f3974a6b8f48fd848f191864650ea763) quincy (stable)

How reproducible:
Tried three times; reproduced each time.

Steps to Reproduce:
1. Configure the cluster and try the `rbd trash purge schedule` commands (a minimal reproduction sketch follows below).
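
A minimal reproduction sketch, run from inside the cephadm shell, assuming an illustrative pool name `test` (any RBD pool will do):

# create and initialize an RBD pool to schedule trash purges against
ceph osd pool create test
rbd pool init test

# every schedule subcommand fails the same way, with or without arguments
rbd trash purge schedule add 10m
rbd trash purge schedule list
rbd trash purge schedule status
rbd trash purge schedule remove 10m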

Actual results:
[ceph: root@ceph-vasishta-c9hkq7-node1-installer /]# rbd trash purge schedule remove
rbd: too many arguments
[ceph: root@ceph-vasishta-c9hkq7-node1-installer /]# rbd trash purge schedule add   
rbd: too many arguments
[ceph: root@ceph-vasishta-c9hkq7-node1-installer /]# rbd trash purge schedule list
rbd: too many arguments
[ceph: root@ceph-vasishta-c9hkq7-node1-installer /]# rbd trash purge schedule status
rbd: too many arguments


Expected results:
The `rbd trash purge schedule` commands should work as in previous releases.

Additional info:
From a Pacific (5.2) cluster:

[ceph: root@magna031 /]#  rbd trash purge schedule list
[ceph: root@magna031 /]#  rbd trash purge schedule add 1
[ceph: root@magna031 /]#  rbd trash purge schedule list
every 1m
[ceph: root@magna031 /]# rbd trash purge schedule status
POOL         NAMESPACE  SCHEDULE TIME      
check                   2022-08-10 08:08:00
test_mirror             2022-08-10 08:08:00
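
For context, the `add 1` above creates a schedule listed as `every 1m`: a bare interval is treated as minutes, and suffixes such as m, h and d are accepted. A small illustrative sketch (the pool name is an example, not taken from this report):

rbd trash purge schedule add --pool test 12h
rbd trash purge schedule ls --pool test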


FYI, all of the schedule subcommand help messages return the same output, namely the usage of the parent `rbd trash purge` command:
# rbd help trash purge schedule status
usage: rbd trash purge [--pool <pool>] [--namespace <namespace>] 
                       [--no-progress] [--expired-before <expired-before>] 
                       [--threshold <threshold>] 
                       <pool-spec> 

Remove all expired images from trash.

Positional arguments
  <pool-spec>           pool specification
                        (example: <pool-name>[/<namespace>])

Optional arguments
  -p [ --pool ] arg     pool name
  --namespace arg       namespace name
  --no-progress         disable progress output
  --expired-before date purges images that expired before the given date
  --threshold arg       purges images until the current pool data usage is
                        reduced to X%, value range: 0.0-1.0
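
Both the error (`rbd: too many arguments`) and the fall-through to the parent command's help point at the rbd CLI argument parsing for the schedule subcommands rather than the scheduler itself. If that is the case, the schedules may still be manageable directly through the rbd_support mgr module until a fixed build is available; this is an assumption about the implementation, and the command names and arguments below are taken from the upstream module rather than confirmed against this build:

ceph rbd trash purge schedule add test 10m      # level-spec (the pool name here), then interval (assumed)
ceph rbd trash purge schedule list test
ceph rbd trash purge schedule status
ceph rbd trash purge schedule remove test 10m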

Comment 6 Preethi 2022-08-11 11:23:53 UTC
@Ilya, I see the issue in my newly deployed 6.0 cluster:

[ceph: root@ceph-pnataraj-lnkd9w-node1-installer /]# ceph versions
{
    "mon": {
        "ceph version 17.2.3-1.el9cp (774dab48f3974a6b8f48fd848f191864650ea763) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.3-1.el9cp (774dab48f3974a6b8f48fd848f191864650ea763) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.3-1.el9cp (774dab48f3974a6b8f48fd848f191864650ea763) quincy (stable)": 11
    },
    "mds": {},
    "overall": {
        "ceph version 17.2.3-1.el9cp (774dab48f3974a6b8f48fd848f191864650ea763) quincy (stable)": 16
    }
}
[ceph: root@ceph-pnataraj-lnkd9w-node1-installer /]# rbd trash purge schedule remove
rbd: too many arguments
[ceph: root@ceph-pnataraj-lnkd9w-node1-installer /]# rbd trash purge schedule add
rbd: too many arguments
[ceph: root@ceph-pnataraj-lnkd9w-node1-installer /]# rbd trash purge schedule list
rbd: too many arguments
[ceph: root@ceph-pnataraj-lnkd9w-node1-installer /]# rbd trash purge schedule status
rbd: too many arguments
[ceph: root@ceph-pnataraj-lnkd9w-node1-installer /]#

Comment 12 Preethi 2022-08-18 14:17:46 UTC
This is working fine with the latest build of RHCS 6.0. The snippet and build version are below.

[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]# rbd trash purge schedule add --pool test 10m
[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]# rbd trash purge schedule ls --pool test 
every 10m

[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]# rbd ls test
image1
[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]# rbd trash purge schedule status
POOL  NAMESPACE  SCHEDULE TIME      
test             2022-08-18 06:10:00
[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]# rbd trash purge schedule remove --pool test 10m

[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]#  rbd trash purge schedule ls --pool test
[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]# ceph versions
{
    "mon": {
        "ceph version 17.2.3-9.el9cp (ec1f163818fab0b1a8a98bfe1ec5c949373b0e6d) quincy (stable)": 5
    },
    "mgr": {
        "ceph version 17.2.3-9.el9cp (ec1f163818fab0b1a8a98bfe1ec5c949373b0e6d) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.3-9.el9cp (ec1f163818fab0b1a8a98bfe1ec5c949373b0e6d) quincy (stable)": 12
    },
    "mds": {},
    "overall": {
        "ceph version 17.2.3-9.el9cp (ec1f163818fab0b1a8a98bfe1ec5c949373b0e6d) quincy (stable)": 19
    }
}

Comment 20 errata-xmlrpc 2023-03-20 18:57:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360