Bug 1744283

Summary: bucket sync status reports complete but bilog keys left untrimmed
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: RGW-Multisite
Version: 3.2
Target Release: 4.*
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: medium
Hardware: Unspecified
OS: Unspecified
Reporter: Tim Wilkinson <twilkins>
Assignee: Casey Bodley <cbodley>
QA Contact: Tejas <tchandra>
CC: assingh, cbodley, ceph-eng-bugs, ceph-qe-bugs, jharriga, mbenjamin, racpatel, vumrao
Type: Bug
Last Closed: 2022-04-26 11:20:43 UTC
Bug Blocks: 1727980

Description Tim Wilkinson 2019-08-21 18:10:34 UTC
Description of problem:
----------------------
The individual bucket sync status reports the bucket as fully synced, yet roughly 2.3 million bilog keys remain untrimmed on the bucket index object, generating large omap object warnings.
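
For reference, the large omap object warning described here would normally surface through the cluster health checks after the index object is deep-scrubbed; a minimal, assumed way to confirm it (not taken from this report, and assuming the warning threshold has already been crossed) is:

# ceph health detail | grep -A 3 LARGE_OMAP_OBJECTS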



Component Version-Release:
-------------------------
RHEL 7.6 (Maipo), kernel 3.10.0-957.el7.x86_64
ceph-base.x86_64   2:12.2.8-128.el7cp



How reproducible:
----------------
unknown



Steps to Reproduce:
------------------
1. configure multisite on two clusters using ansible playbook
2. populate master site with data, observe sync activity
3. check bucket sync status and bilog details (see the command sketch after this list)
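
A minimal sketch of the checks in step 3, assuming the bucket name mycontainers2 that appears later in this report (the exact bucket name is an assumption here):

# radosgw-admin bucket sync status --bucket=mycontainers2
# radosgw-admin bilog list --bucket=mycontainers2

The second command dumps the untrimmed bucket index log entries; the rados listomapkeys check shown under Additional info is an alternative way to count the index-log keys at the RADOS level.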



Actual results:
--------------
bucket sync status reports complete but bilogs aren't trimmed accordingly



Expected results:
----------------
bucket sync status reports complete and bilogs are trimmed



Additional info:
---------------
# radosgw-admin bucket sync status --bucket=mycontainers2
          realm 25eeb4ef-aa47-4f50-85e3-0c9a68883cf1 (scaleLTA)
      zonegroup 053ddd45-7321-4a3f-b9b7-253edc830725 (cloud07)
           zone d6499579-6232-4209-b1e4-88112599b5ac (site2)
         bucket mycontainers2[7659bed4-dcd2-4616-95b3-4f7d971c6dd8.2887729.1]

    source zone 7659bed4-dcd2-4616-95b3-4f7d971c6dd8 (site1)
                full sync: 0/1 shards
                incremental sync: 1/1 shards
                bucket is caught up with source



# radosgw-admin bucket stats
   ...
    {
        "bucket": "mycontainers2",
        "zonegroup": "053ddd45-7321-4a3f-b9b7-253edc830725",
        "placement_rule": "default-placement",
        "explicit_placement": {
            "data_pool": "",
            "data_extra_pool": "",
            "index_pool": ""
        },
        "id": "7659bed4-dcd2-4616-95b3-4f7d971c6dd8.2887729.1",
        "marker": "7659bed4-dcd2-4616-95b3-4f7d971c6dd8.2887729.1",
        "index_type": "Normal",
        "owner": "johndoe",
        "ver": "0#2365894",
        "master_ver": "0#0",
        "mtime": "2019-08-13 23:47:21.536665",
        "max_marker": "0#00002365893.2366059.5",
        "usage": {
            "rgw.main": {
                "size": 11271073160000,
                "size_actual": 11271454883840,
                "size_utilized": 11271073160000,
                "size_kb": 11006907383,
                "size_kb_actual": 11007280160,
                "size_kb_utilized": 11006907383,
                "num_objects": 210972
            }
        },
        "bucket_quota": {
            "enabled": false,
            "check_on_raw": true,
            "max_size": -1,
            "max_size_kb": 0,
            "max_objects": -1
        }
    },

    ...



# rados -p site1.rgw.buckets.index listomapkeys .dir.7659bed4-dcd2-4616-95b3-4f7d971c6dd8.2887729.1 | wc -l
2346867
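
Not part of the original report, but for completeness: once sync is confirmed caught up, the bucket index log can in principle be trimmed manually and the key count re-checked. A hedged sketch, assuming the same bucket and index pool (bilog trim also accepts --start-marker/--end-marker bounds if only a range should be removed):

# radosgw-admin bilog trim --bucket=mycontainers2
# rados -p site1.rgw.buckets.index listomapkeys .dir.7659bed4-dcd2-4616-95b3-4f7d971c6dd8.2887729.1 | wc -l

Whether a manual trim is appropriate depends on the root cause; the expectation in this report is that trimming happens automatically once sync reports complete.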




From the log ...
272:77fe05db:::.dir.7659bed4-dcd2-4616-95b3-4f7d971c6dd8.2907365.1:head Key count: 2342160 Size (bytes): 448597889

Comment 13 Vikhyat Umrao 2019-09-05 21:33:55 UTC
*** Bug 1736797 has been marked as a duplicate of this bug. ***

Comment 15 Yaniv Kaul 2020-02-19 17:29:51 UTC
Casey - what's the latest status here?