Bug 1894702 - Unnecessary bilogs are left in sync-disabled buckets
Summary: Unnecessary bilogs are left in sync-disabled buckets
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW-Multisite
Version: 4.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.2
Assignee: J. Eric Ivancich
QA Contact: Vidushi Mishra
Docs Contact: Amrita
URL:
Whiteboard:
Depends On:
Blocks: 1890121
 
Reported: 2020-11-04 20:36 UTC by J. Eric Ivancich
Modified: 2023-09-15 00:50 UTC
CC List: 6 users

Fixed In Version: ceph-14.2.11-72.el8cp, ceph-14.2.11-72.el7cp
Doc Type: Bug Fix
Doc Text:
.Bucket index logs no longer collect entries after syncing on a bucket has been disabled Previously, running the `radosgw-admin bucket check --fix ...` command on a bucket for which multi-site syncing had been disabled would set an incorrect flag indicating that syncing was not disabled. As a result, data was added to bucket index logs that would never be used or trimmed, consuming more storage over time. With this release, the syncing flag is copied correctly when the `radosgw-admin bucket check --fix ...` command is run, and bucket index logs no longer collect entries after syncing on a bucket has been disabled.
Clone Of:
Environment:
Last Closed: 2021-01-12 14:58:09 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2021:0081 0 None None None 2021-01-12 14:58:33 UTC

Description J. Eric Ivancich 2020-11-04 20:36:17 UTC
Description of problem:

There is an issue where bilogs are left in sync-disabled buckets in a multi-site, sync-enabled cluster. In our cluster, these logs accumulated over a long period and caused a performance problem.
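
To gauge how many unused bilog entries have accumulated on a suspect bucket, counting the operations reported by `bilog list` gives a rough figure. A minimal sketch follows; the bucket name is a placeholder, and the use of --max-entries to raise the default listing limit is an assumption worth checking against your radosgw-admin version:

$ radosgw-admin bilog list --bucket <bucket-name> --max-entries 1000000 | grep -c '"op":'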

This phenomenon can be reproduced with the following procedure:
First, create a multi-site cluster (c1, c2) using the rgw/test-rgw-multisite.sh script (see the sketch below), and then execute the commands under Steps to Reproduce.
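
A minimal setup sketch, run from the build directory of a ceph source tree; the script path and the assumption that its first argument is the number of clusters to create should be verified against the script itself:

$ cd build
$ ../src/test/rgw/test-rgw-multisite.sh 2    # assumed usage: brings up clusters c1 and c2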

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:

$ yum install s3cmd -y
$ cat >> c1-s3cmd << EOF
[default]
access_key = 1234567890
secret_key = pencil
host_base = 127.0.0.1:8001
host_bucket = 127.0.0.1:8001
use_https = False
EOF
$ s3cmd -c c1-s3cmd mb s3://mytest1
$ ../src/mrun c1 radosgw-admin bucket sync disable --bucket mytest1
$ s3cmd -c c1-s3cmd put obj s3://mytest1
$ ../src/mrun c1 radosgw-admin bilog list --bucket mytest1 | grep -A1 write

    (Nothing left)

$ s3cmd -c c1-s3cmd mb s3://mytest2
$ ../src/mrun c1 radosgw-admin bucket sync disable --bucket mytest2
$ ../src/mrun c1 radosgw-admin bucket check --fix --bucket mytest2
$ s3cmd -c c1-s3cmd put obj s3://mytest2
$ ../src/mrun c1 radosgw-admin bilog list --bucket mytest2 | grep -A1 write
        "op": "write",
        "object": "obj",

    (a bilog is left in the sync-disabled bucket)
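
Entries that accumulated before the fix are not removed automatically once the bucket is sync-disabled, so they can be trimmed manually. A rough cleanup sketch follows; the end marker is taken from the op_id of the last entry reported by `bilog list`, and the exact marker field and --end-marker behaviour are assumptions to verify on your version:

$ ../src/mrun c1 radosgw-admin bilog list --bucket mytest2                             # note the op_id of the last entry
$ ../src/mrun c1 radosgw-admin bilog trim --bucket mytest2 --end-marker <last-op-id>
$ ../src/mrun c1 radosgw-admin bilog list --bucket mytest2                             # expect an empty list afterwards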

Actual results:


Expected results:


Additional info:

Comment 1 J. Eric Ivancich 2020-11-04 20:38:50 UTC
Note: this comes from upstream tracker: https://tracker.ceph.com/issues/48037

Comment 2 J. Eric Ivancich 2020-11-04 21:09:14 UTC
Casey,

Would you mind handling the Doc Text on this one, since you understand the implications?

Thanks,

Eric

Comment 7 Amrita 2020-11-28 18:40:54 UTC
Hi Eric and Casey, 

Could you please set the "Doc Type" field and fill out the "Doc Text" template with the relevant information? This is for inclusion in the RHCS 4.2 Release Notes.

Thanks
Amrita

Comment 9 J. Eric Ivancich 2020-12-02 15:16:05 UTC
(In reply to Amrita from comment #7)
> Hi Eric and Casey, 
> 
> Could you please set the "Doc Type" field and fill out the "Doc Text"
> template with the relevant information.  This is for inclusion in the RHCS
> 4.2 Release Notes.
> 
> Thanks
> Amrita

I did my best with the Doc Text since Casey is on PTO this week.

Eric

Comment 12 errata-xmlrpc 2021-01-12 14:58:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 4.2 Security and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0081

Comment 13 Red Hat Bugzilla 2023-09-15 00:50:42 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days.

