Description of problem:

Bilog entries are left behind in sync-disabled buckets in a multi-site sync-enabled cluster. In our cluster these logs accumulated over a long period and caused a performance problem.

The issue can be reproduced with the following procedure: first create a multi-site cluster (c1, c2) using the rgw/test-rgw-multisite.sh script, then run the commands below. The only difference between the two buckets in the steps is that "radosgw-admin bucket check --fix" is run on mytest2 after sync is disabled; after that, new writes generate bilog entries even though bucket sync is disabled.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:

$ yum install s3cmd -y

$ cat >> c1-s3cmd << EOF
[default]
access_key = 1234567890
secret_key = pencil
host_base = 127.0.0.1:8001
host_bucket = 127.0.0.1:8001
use_https = False
EOF

$ s3cmd -c c1-s3cmd mb s3://mytest1
$ ../src/mrun c1 radosgw-admin bucket sync disable --bucket mytest1
$ s3cmd -c c1-s3cmd put obj s3://mytest1
$ ../src/mrun c1 radosgw-admin bilog list --bucket mytest1 | grep -A1 write
(nothing is left)

$ s3cmd -c c1-s3cmd mb s3://mytest2
$ ../src/mrun c1 radosgw-admin bucket sync disable --bucket mytest2
$ ../src/mrun c1 radosgw-admin bucket check --fix --bucket mytest2
$ s3cmd -c c1-s3cmd put obj s3://mytest2
$ ../src/mrun c1 radosgw-admin bilog list --bucket mytest2 | grep -A1 write
        "op": "write",
        "object": "obj",
(a bilog entry is left in the sync-disabled bucket)

Actual results:

Expected results:

Additional info:
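For anyone checking whether a bucket in their own cluster shows this behavior, the script below is a minimal sketch built only from the commands already used in the reproduction steps (radosgw-admin bucket sync status and bilog list). The script name, the $bucket variable, and the grep-based counting are illustrative assumptions, not part of the fix; it assumes radosgw-admin on the PATH is pointed at the affected zone.

#!/bin/bash
# check-bilog-leak.sh (hypothetical name): report whether a bucket still
# accumulates bilog entries even though its sync is disabled.
# Usage: ./check-bilog-leak.sh <bucket-name>
set -euo pipefail

bucket="$1"

# Show the bucket's sync status as reported by RGW (informational only).
radosgw-admin bucket sync status --bucket "$bucket" || true

# Count bilog entries for the bucket. On a sync-disabled bucket this
# should stay at zero after new writes (the mytest1 case above); a bucket
# hit by this bug (the mytest2 case) keeps growing.
entries=$(radosgw-admin bilog list --bucket "$bucket" | grep -c '"op":' || true)

echo "bucket=$bucket bilog_entries=$entries"

In the vstart-based test setup above, the same commands would be run through the ../src/mrun c1 radosgw-admin ... wrapper instead of plain radosgw-admin.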
Note: this comes from upstream tracker: https://tracker.ceph.com/issues/48037
Casey,

Would you mind handling the Doc Text on this one, since you understand the implications?

Thanks,
Eric
Hi Eric and Casey,

Could you please set the "Doc Type" field and fill out the "Doc Text" template with the relevant information? This is for inclusion in the RHCS 4.2 Release Notes.

Thanks,
Amrita
(In reply to Amrita from comment #7)
> Hi Eric and Casey,
>
> Could you please set the "Doc Type" field and fill out the "Doc Text"
> template with the relevant information. This is for inclusion in the RHCS
> 4.2 Release Notes.
>
> Thanks
> Amrita

I did my best with the Doc Text since Casey is on PTO this week.

Eric
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 4.2 Security and Bug Fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0081
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days