Bug 1744283 - bucket sync status reports complete but bilog keys left untrimmed
Summary: bucket sync status reports complete but bilog keys left untrimmed
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW-Multisite
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 4.*
Assignee: Casey Bodley
QA Contact: Tejas
URL:
Whiteboard:
Duplicates: 1736797
Depends On:
Blocks: 1727980
 
Reported: 2019-08-21 18:10 UTC by Tim Wilkinson
Modified: 2022-04-26 11:20 UTC
CC: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-26 11:20:43 UTC
Embargoed:



Description Tim Wilkinson 2019-08-21 18:10:34 UTC
Description of problem:
----------------------
The per-bucket sync status reports the bucket as completely synced, yet roughly 2.3 million bilog keys remain untrimmed, generating large omap object warnings.



Component Version-Release:
-------------------------
RHEL 7.6 (Maipo)   kernel 3.10.0-957.el7.x86_64
ceph-base.x86_64   2:12.2.8-128.el7cp



How reproducible:
----------------
unknown



Steps to Reproduce:
------------------
1. Configure multisite replication on two clusters using the Ansible playbook.
2. Populate the master site with data and observe sync activity.
3. Check the bucket sync status and bilog details on the secondary site.
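
Step 3 can be sketched as a short check on the secondary site. This is only an illustrative sequence; the bucket name, index pool, and bucket instance id below are the ones from this report and will differ elsewhere.

# Compare the reported sync status with the actual bilog key count.
radosgw-admin bucket sync status --bucket=mycontainers2

# The bucket instance id appears as "id" in bucket stats and names the
# index object (.dir.<id>) that holds the bilog omap keys:
radosgw-admin bucket stats --bucket=mycontainers2

# Count omap keys on the (single-shard) index object of the source zone:
rados -p site1.rgw.buckets.index listomapkeys .dir.7659bed4-dcd2-4616-95b3-4f7d971c6dd8.2887729.1 | wc -l

If the status says "bucket is caught up with source" while the key count stays in the millions, the trimming side of the issue is reproduced.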



Actual results:
--------------
bucket sync status reports the bucket as caught up, but the bilog entries are never trimmed accordingly



Expected results:
----------------
bucket sync status reports the bucket as caught up, and the bilog entries are trimmed once sync has completed



Additional info:
---------------
# radosgw-admin bucket sync status --bucket=mycontainers2
          realm 25eeb4ef-aa47-4f50-85e3-0c9a68883cf1 (scaleLTA)
      zonegroup 053ddd45-7321-4a3f-b9b7-253edc830725 (cloud07)
           zone d6499579-6232-4209-b1e4-88112599b5ac (site2)
         bucket mycontainers2[7659bed4-dcd2-4616-95b3-4f7d971c6dd8.2887729.1]

    source zone 7659bed4-dcd2-4616-95b3-4f7d971c6dd8 (site1)
                full sync: 0/1 shards
                incremental sync: 1/1 shards
                bucket is caught up with source



# radosgw-admin bucket stats
   ...
    {
        "bucket": "mycontainers2",
        "zonegroup": "053ddd45-7321-4a3f-b9b7-253edc830725",
        "placement_rule": "default-placement",
        "explicit_placement": {
            "data_pool": "",
            "data_extra_pool": "",
            "index_pool": ""
        },
        "id": "7659bed4-dcd2-4616-95b3-4f7d971c6dd8.2887729.1",
        "marker": "7659bed4-dcd2-4616-95b3-4f7d971c6dd8.2887729.1",
        "index_type": "Normal",
        "owner": "johndoe",
        "ver": "0#2365894",
        "master_ver": "0#0",
        "mtime": "2019-08-13 23:47:21.536665",
        "max_marker": "0#00002365893.2366059.5",
        "usage": {
            "rgw.main": {
                "size": 11271073160000,
                "size_actual": 11271454883840,
                "size_utilized": 11271073160000,
                "size_kb": 11006907383,
                "size_kb_actual": 11007280160,
                "size_kb_utilized": 11006907383,
                "num_objects": 210972
            }
        },
        "bucket_quota": {
            "enabled": false,
            "check_on_raw": true,
            "max_size": -1,
            "max_size_kb": 0,
            "max_objects": -1
        }
    },

    ...



# rados -p site1.rgw.buckets.index listomapkeys .dir.7659bed4-dcd2-4616-95b3-4f7d971c6dd8.2887729.1 | wc -l
2346867




From the log ...
272:77fe05db:::.dir.7659bed4-dcd2-4616-95b3-4f7d971c6dd8.2907365.1:head Key count: 2342160 Size (bytes): 448597889
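
As a possible interim workaround (an assumption on my part, not something verified in this report), the bilog for the affected shard could be trimmed manually up to the max_marker reported by bucket stats:

# Hedged workaround sketch -- not verified here. List the bilog first,
# then trim up to the shard's max marker (value taken from the
# "max_marker" field in the bucket stats output above):
radosgw-admin bilog list --bucket=mycontainers2
radosgw-admin bilog trim --bucket=mycontainers2 --end-marker=00002365893.2366059.5

Note this only removes the backlog; it does not address why automatic trimming failed to run after sync caught up.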

Comment 13 Vikhyat Umrao 2019-09-05 21:33:55 UTC
*** Bug 1736797 has been marked as a duplicate of this bug. ***

Comment 15 Yaniv Kaul 2020-02-19 17:29:51 UTC
Casey - what's the latest status here?

