Bug 1894702

Summary: Unnecessary bilogs are left in sync-disabled buckets
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: J. Eric Ivancich <ivancich>
Component: RGW-Multisite Assignee: J. Eric Ivancich <ivancich>
Status: CLOSED ERRATA QA Contact: Vidushi Mishra <vimishra>
Severity: medium Docs Contact: Amrita <asakthiv>
Priority: unspecified    
Version: 4.1 CC: asakthiv, cbodley, ceph-eng-bugs, ceph-qe-bugs, tchandra, tserlin
Target Milestone: ---   
Target Release: 4.2   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: ceph-14.2.11-72.el8cp, ceph-14.2.11-72.el7cp Doc Type: Bug Fix
Doc Text:
.Bucket index logs do not collect entries after syncing on a bucket has been disabled
Previously, running the `radosgw-admin bucket check --fix ...` command on a bucket for which multi-site syncing had been disabled would reset the flag that indicates syncing is disabled. Data would then be added to the bucket index logs that would never be used or trimmed, consuming more storage over time. With this release, the syncing flag is copied correctly when the `radosgw-admin bucket check --fix ...` command is run, and bucket index logs no longer collect entries after syncing on a bucket has been disabled.
Story Points: ---
Clone Of: Environment:
Last Closed: 2021-01-12 14:58:09 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1890121    

Description J. Eric Ivancich 2020-11-04 20:36:17 UTC
Description of problem:

There is an issue where bilogs are left behind in sync-disabled buckets in a multi-site, sync-enabled cluster. In our cluster, these logs accumulated over a long period of time and caused a performance problem.

This phenomenon can be reproduced with the following procedure:
First, create a multi-site cluster (c1, c2) using the rgw/test-rgw-multisite.sh script, then execute the commands below.
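
For reference, a minimal sketch of that setup step, run from the build directory like the rest of the commands below; the script path and the cluster-count argument are assumptions, so check the script's usage if it differs:

$ cd build
$ ../src/test/rgw/test-rgw-multisite.sh 2    # assumed: argument is the number of clusters (c1, c2)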

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:

$ yum install s3cmd -y
$ cat >> c1-s3cmd << EOF
[default]
access_key = 1234567890
secret_key = pencil
host_base = 127.0.0.1:8001
host_bucket = 127.0.0.1:8001
use_https = False
EOF
$ s3cmd -c c1-s3cmd mb s3://mytest1
$ ../src/mrun c1 radosgw-admin bucket sync disable --bucket mytest1
$ s3cmd  -c c1-s3cmd put obj s3://mytest1
$ ../src/mrun c1 radosgw-admin bilog list --bucket mytest1 | grep -A1 write

    (Nothing left)
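
Not part of the original reproducer, but to confirm that the bucket really is sync-disabled at this point, the sync status can be checked; the command should report that sync is disabled for the bucket:

$ ../src/mrun c1 radosgw-admin bucket sync status --bucket mytest1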

$ s3cmd -c c1-s3cmd mb s3://mytest2
$ ../src/mrun c1 radosgw-admin bucket sync disable --bucket mytest2
$ ../src/mrun c1 radosgw-admin bucket check --fix --bucket mytest2
$ s3cmd  -c c1-s3cmd put obj s3://mytest2
$ ../src/mrun c1 radosgw-admin bilog list --bucket mytest2 | grep -A1 write
        "op": "write",
        "object": "obj",

    (a bilog is left in the sync-disabled bucket)
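
Not part of the original reproducer: the sync-disabled state is stored in the bucket instance metadata, so the flag that `bucket check --fix` clobbers can be inspected directly. The 0x8 flag value is an assumption taken from the upstream rgw_common.h (BUCKET_DATASYNC_DISABLED) and may differ by release:

$ ../src/mrun c1 radosgw-admin bucket stats --bucket mytest2
    # note the bucket instance "id" in the output
$ ../src/mrun c1 radosgw-admin metadata get bucket.instance:mytest2:<bucket_id>
    # with the bug, the datasync-disabled bit (0x8) is expected to be missing
    # from the "flags" field after "bucket check --fix" has run

As an unverified workaround, re-running `bucket sync disable` on the affected bucket should restore the flag, and the leftover entries can then be removed with `bilog trim` (the trim may need explicit --start-marker/--end-marker values taken from `bilog list`):

$ ../src/mrun c1 radosgw-admin bucket sync disable --bucket mytest2
$ ../src/mrun c1 radosgw-admin bilog trim --bucket mytest2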

Actual results:


Expected results:


Additional info:

Comment 1 J. Eric Ivancich 2020-11-04 20:38:50 UTC
Note: this comes from upstream tracker: https://tracker.ceph.com/issues/48037

Comment 2 J. Eric Ivancich 2020-11-04 21:09:14 UTC
Casey,

Would you mind handling the Doc Text on this one, since you understand the implications?

Thanks,

Eric

Comment 7 Amrita 2020-11-28 18:40:54 UTC
Hi Eric and Casey, 

Could you please set the "Doc Type" field and fill out the "Doc Text" template with the relevant information? This is for inclusion in the RHCS 4.2 Release Notes.

Thanks
Amrita

Comment 9 J. Eric Ivancich 2020-12-02 15:16:05 UTC
(In reply to Amrita from comment #7)
> Hi Eric and Casey, 
> 
> Could you please set the "Doc Type" field and fill out the "Doc Text"
> template with the relevant information? This is for inclusion in the RHCS
> 4.2 Release Notes.
> 
> Thanks
> Amrita

I did my best with the Doc Text since Casey is on PTO this week.

Eric

Comment 12 errata-xmlrpc 2021-01-12 14:58:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 4.2 Security and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0081

Comment 13 Red Hat Bugzilla 2023-09-15 00:50:42 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days