Description of problem:

Looks to be hitting the upstream tracker: https://tracker.ceph.com/issues/42358

The user is unable to open the bucket in S3 Browser: it counts millions of objects but does not show any data in the bucket. Attempting to list the bucket or count its usage with s3cmd returns an error.

Bucket check errors with:

radosgw-admin bucket check --bucket dfdc
2020-04-11 15:14:05.932970 7f9a371bee00  1 RGWRados::Bucket::List::list_objects_ordered INFO ordered bucket listing requires read #2
2020-04-11 15:14:05.937586 7f9a371bee00  1 RGWRados::Bucket::List::list_objects_ordered INFO ordered bucket listing requires read #3
2020-04-11 15:14:05.944261 7f9a371bee00  1 RGWRados::Bucket::List::list_objects_ordered INFO ordered bucket listing requires read #2
2020-04-11 15:14:05.948948 7f9a371bee00  1 RGWRados::Bucket::List::list_objects_ordered INFO ordered bucket listing requires read #3
2020-04-11 15:14:05.954184 7f9a371bee00  1 RGWRados::Bucket::List::list_objects_ordered INFO ordered bucket listing requires read #2
2020-04-11 15:14:05.958830 7f9a371bee00  1 RGWRados::Bucket::List::list_objects_ordered INFO ordered bucket listing requires read #3
2020-04-11 15:14:05.963661 7f9a371bee00  1 RGWRados::Bucket::List::list_objects_ordered INFO ordered bucket listing requires read #2
2020-04-11 15:14:05.968396 7f9a371bee00  1 RGWRados::Bucket::List::list_objects_ordered INFO ordered bucket listing requires read #3
2020-04-11 15:14:05.973161 7f9a371bee00  1 RGWRados::Bucket::List::list_objects_ordered INFO ordered bucket listing requires read #2

The bucket uses multipart uploads and separates uploads into 15 MB chunks for very large (50 GB+) objects.

Version-Release number of selected component (if applicable):
RHCS 3.3

Additional info:
Is there any workaround for https://tracker.ceph.com/issues/42358? Would cleaning up failed multipart uploads resolve this issue?
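For reference, the kind of multipart cleanup being asked about would look roughly like the following. This is only a sketch (untested against this cluster, and not confirmed to resolve tracker 42358); the bucket name "dfdc" is taken from the check output above, and the object key / upload Id placeholders have to come from the listing itself.

# 1. List in-progress (possibly stale/failed) multipart uploads in the bucket:
s3cmd multipart s3://dfdc

# 2. Abort each stale upload, using the object key and upload Id shown by the listing:
s3cmd abortmp s3://dfdc/<object-key> <upload-id>

# 3. Re-run the index check afterwards to see whether the listing behaviour changes:
radosgw-admin bucket check --bucket=dfdc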
Hello,

We are also using the same Ceph version.

[root@admin01 ~]# ceph version
ceph version 12.2.12-84.el7cp (1ce826ed564c8063ac6c876df66bd8ab31b6cc66) luminous (stable)

[root@admin01 ~]# radosgw-admin bucket stats | grep ceph-share-1
    "bucket": "ceph-share-1",

And we get the same error when trying to remove the bucket with the radosgw-admin command from the admin node.

[root@admin01 ~]# radosgw-admin bucket rm --bucket=cenudflr/ceph-share-1
2020-09-09 00:14:34.464845 7f001ae7be00  1 RGWRados::Bucket::List::list_objects_ordered INFO ordered bucket listing requires read #2
2020-09-09 00:14:34.467942 7f001ae7be00  1 RGWRados::Bucket::List::list_objects_ordered INFO ordered bucket listing requires read #3
2020-09-09 00:14:34.470879 7f001ae7be00  1 RGWRados::Bucket::List::list_objects_ordered INFO ordered bucket listing requires read #2
2020-09-09 00:14:34.473422 7f001ae7be00  1 RGWRados::Bucket::List::list_objects_ordered INFO ordered bucket listing requires read #3

From the S3 client:
===================
Invoked as: /usr/local/bin/s3cmd rm -v -d s3://ceph-share-1/ --recursive --force -c /root/.s3cfg-se
Problem: <class 'IndexError'>: list index out of range
S3cmd:  2.1.0
python: 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
environment LANG=en_US.UTF-8

Traceback (most recent call last):
  File "/usr/local/bin/s3cmd", line 3121, in <module>
    rc = main()
  File "/usr/local/bin/s3cmd", line 3030, in main
    rc = cmd_func(args)
  File "/usr/local/bin/s3cmd", line 659, in cmd_object_del
    rc = subcmd_batch_del_iterative(uri_str = uri_str)
  File "/usr/local/bin/s3cmd", line 681, in subcmd_batch_del_iterative
    for _, _, to_delete in s3.bucket_list_streaming(bucket, prefix=uri.object(), recursive=True):
  File "/usr/local/lib/python3.6/site-packages/S3/S3.py", line 368, in bucket_list_streaming
    uri_params['marker'] = current_prefixes[-1]["Prefix"]
IndexError: list index out of range

/Jay
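In case it is useful: since the recursive delete above fails inside s3cmd's own listing loop, a server-side purge keeps the iteration inside RGW instead of the client. This is only a sketch (untested here, tenant/bucket name taken from the command above), and it may still hit the same "ordered bucket listing requires read #N" path because it also walks the bucket index:

radosgw-admin bucket rm --bucket=cenudflr/ceph-share-1 --purge-objects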
Hello Jay,

This bug BZ#1831740 (https://bugzilla.redhat.com/show_bug.cgi?id=1831740) was fixed in the latest Red Hat Ceph Storage 3.3z6 release (12.2.12-124.el7cp).

Regards,
Frédéric.
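To confirm the cluster is actually running the fixed build after updating, something along these lines should work (standard commands; the exact package build string is the thing to compare against 12.2.12-124.el7cp):

# On the admin node: report the version each running daemon (mon/osd/mgr/rgw) is using.
ceph versions

# On each RGW node: check the installed package build.
rpm -q ceph-radosgw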
Hi Jay,

Not sure anymore. The symptoms are the same, but it may be a different issue. Give RHCS 3.3z6 a try and see how it goes.

Frédéric.
Hi Frederic,

Sure, I will post the results here. Thank you for your valuable response.

Regards,
Jay