
Bug 1744276

Summary: sync status reports complete while bucket status does not
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Tim Wilkinson <twilkins>
Component: RGW-Multisite
Assignee: Casey Bodley <cbodley>
Status: CLOSED ERRATA
QA Contact: Uday kurundwade <ukurundw>
Severity: high
Priority: medium
Version: 3.2
CC: assingh, cbodley, ceph-eng-bugs, ceph-qe-bugs, jharriga, mbenjamin, tserlin, ukurundw, vumrao
Keywords: Regression
Target Milestone: rc
Target Release: 4.1
Hardware: x86_64
OS: Linux
Fixed In Version: ceph-14.2.4-96.el8cp, ceph-14.2.4-36.el7cp
Doc Type: If docs needed, set a value
Cloned To: 1746127 (view as bug list)
Bug Blocks: 1727980, 1746127
Last Closed: 2020-05-19 17:30:44 UTC
Type: Bug

Description Tim Wilkinson 2019-08-21 17:51:52 UTC
Description of problem:
----------------------
The secondary site of this Ceph 3.2z2 multisite environment reports its overall sync status as complete (metadata and data caught up with source), yet two of the five individual buckets report being behind on shards.



Component Version-Release:
-------------------------
RHEL 7.6 (Maipo), kernel 3.10.0-957.el7.x86_64
ceph-base.x86_64   2:12.2.8-128.el7cp



How reproducible:
----------------
unknown



Steps to Reproduce:
------------------
1. configure multisite on two clusters using the Ansible playbook
2. populate the master site with data and observe sync activity
3. check sync status and bucket sync status (example commands below)
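
For step 3, these are the commands whose output appears under Additional info below; based on the zone shown in that output, they were presumably run on the secondary (site2) cluster:

# radosgw-admin sync status
# radosgw-admin bucket sync status --bucket=mycontainers1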



Actual results:
--------------
sync status complete but individual bucket status behind on shards
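
A minimal sketch of how one might enumerate the sync state of every bucket on the secondary, assuming jq is installed, that radosgw-admin bucket list (with no --bucket argument) returns a JSON array of bucket names, and that the grep pattern matches the "behind"/"caught up" wording seen in the output below:

# Loop over all buckets and print each one's behind/caught-up line.
for b in $(radosgw-admin bucket list | jq -r '.[]'); do
    echo "== ${b} =="
    radosgw-admin bucket sync status --bucket="${b}" | grep -E 'behind|caught up'
done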



Expected results:
----------------
both the overall sync status and the individual bucket sync status report complete



Additional info:
---------------
# radosgw-admin sync status
          realm 25eeb4ef-aa47-4f50-85e3-0c9a68883cf1 (scaleLTA)
      zonegroup 053ddd45-7321-4a3f-b9b7-253edc830725 (cloud07)
           zone d6499579-6232-4209-b1e4-88112599b5ac (site2)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 7659bed4-dcd2-4616-95b3-4f7d971c6dd8 (site1)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source
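
If more detail than this summary is needed, radosgw-admin can also report the data sync state against a specific source zone; a sketch assuming the data sync status subcommand and --source-zone flag available in Luminous-era radosgw-admin:

# radosgw-admin data sync status --source-zone=site1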



# radosgw-admin bucket sync status --bucket=mycontainers1
          realm 25eeb4ef-aa47-4f50-85e3-0c9a68883cf1 (scaleLTA)
      zonegroup 053ddd45-7321-4a3f-b9b7-253edc830725 (cloud07)
           zone d6499579-6232-4209-b1e4-88112599b5ac (site2)
         bucket mycontainers1[7659bed4-dcd2-4616-95b3-4f7d971c6dd8.2907365.1]

    source zone 7659bed4-dcd2-4616-95b3-4f7d971c6dd8 (site1)
                full sync: 0/1 shards
                incremental sync: 1/1 shards
                bucket is behind on 1 shards
                behind shards: [0]
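
One way to check whether the bucket's objects actually replicated despite the "behind" report is to compare object counts on both sites; a minimal sketch, assuming the num_objects figure in the usage section of bucket stats is comparable across zones (run against each cluster):

# radosgw-admin bucket stats --bucket=mycontainers1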

Comment 1 RHEL Program Management 2019-08-21 17:51:58 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 44 errata-xmlrpc 2020-05-19 17:30:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:2231

Comment 45 Red Hat Bugzilla 2023-09-14 05:42:06 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.