Description of problem:
When running the rgw-orphan-list script, if the current UTC time has a single-digit hour (0-9, as opposed to 10-23), the timestamp will contain a space. The timestamp is incorporated into intermediate and output filenames, and in some places the script does not quote the filename, so the space causes an error. Spaces in filenames are also harder for end users to work with.

Version-Release number of selected component (if applicable):

How reproducible:
Very

Steps to Reproduce:
1. Run rgw-orphan-list when the UTC hour is 0-9.

Actual results:
The script produces an error.

Expected results:
The script should run without producing an error.

Additional info:
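A minimal sketch of the failure mode. This is not the script's actual code; it assumes the timestamp is built with date(1), where a space-padded hour conversion (`%k`) embeds a space for hours 0-9 while the zero-padded `%H` does not (the fixed instant uses GNU date's `-d` option for reproducibility):

```shell
#!/bin/sh
# Hypothetical reproduction; variable names here are illustrative only.

# A fixed instant whose UTC hour is a single digit (GNU date -d).
when='2021-06-04 05:15:44 UTC'

# %k pads the hour with a space, so hours 0-9 embed a space in the stamp.
bad_stamp="$(date -u -d "$when" +%Y%m%d%k%M%S)"
echo "bad:  [$bad_stamp]"     # -> [20210604 51544]

# %H zero-pads the hour; the stamp is always 14 space-free digits.
good_stamp="$(date -u -d "$when" +%Y%m%d%H%M%S)"
echo "good: [$good_stamp]"    # -> [20210604051544]

# Filenames built from such variables should also be quoted: an unquoted
# $out would undergo word splitting if a space ever slipped into the stamp.
out="./orphan-list-${good_stamp}.out"
touch "$out"
rm -f "$out"
```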
Two commits cherry-picked to ceph-5.0-rhel-patches.
Verified on 5.0:
```
[cephuser@ceph-s3cmd-1622612004024-node5-osd-rgw ~]$ rgw-orphan-list
Available pools:
    device_health_metrics
    .rgw.root
    default.rgw.log
    default.rgw.control
    default.rgw.meta
    default.rgw.buckets.index
    default.rgw.buckets.data
Which pool do you want to search for orphans? default.rgw.buckets.index
Pool is "default.rgw.buckets.index".
Note: output files produced will be tagged with the current timestamp -- 20210604051544.
running 'rados ls' at Fri Jun  4 01:15:53 EDT 2021
running 'radosgw-admin bucket radoslist' at Fri Jun  4 01:15:53 EDT 2021
computing delta at Fri Jun  4 01:15:54 EDT 2021
341 potential orphans found out of a possible 341 (100%).
The results can be found in './orphan-list-20210604051544.out'.
    Intermediate files are './rados-20210604051544.intermediate' and
    './radosgw-admin-20210604051544.intermediate'.
***
*** WARNING: This is EXPERIMENTAL code and the results should be used
***          only with CAUTION!
***
Done at Fri Jun  4 01:15:54 EDT 2021.
```

Ceph version details:
```
[cephuser@ceph-s3cmd-1622612004024-node5-osd-rgw ~]$ ceph --version
ceph version 16.2.0-46.el8cp (66a64d4057b4d63ac87706a71f1a92d88d700515) pacific (stable)
```
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294