Description of problem:
----------------------
The secondary site of this Ceph 3.2z2 multisite environment reports its overall sync status as complete (metadata and data caught up with source), but two of the five individual buckets report being behind on shards.
Component Version-Release:
-------------------------
RHEL 7.6 (Maipo), kernel 3.10.0-957.el7.x86_64
ceph-base.x86_64 2:12.2.8-128.el7cp
How reproducible:
----------------
unknown
Steps to Reproduce:
------------------
1. Configure multisite on two clusters using the Ansible playbook.
2. Populate the master site with data and observe sync activity.
3. Check the overall sync status and the per-bucket sync status (see the commands after this list).
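For reference, step 3 corresponds to the two commands whose output is shown under Additional info below; mycontainers1 is one of the five buckets populated in step 2:

# radosgw-admin sync status
# radosgw-admin bucket sync status --bucket=mycontainers1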
Actual results:
--------------
The overall sync status reports caught up, but two individual buckets report being behind on shards.
Expected results:
----------------
Both the overall sync status and every individual bucket's sync status report caught up.
Additional info:
---------------
# radosgw-admin sync status
          realm 25eeb4ef-aa47-4f50-85e3-0c9a68883cf1 (scaleLTA)
      zonegroup 053ddd45-7321-4a3f-b9b7-253edc830725 (cloud07)
           zone d6499579-6232-4209-b1e4-88112599b5ac (site2)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 7659bed4-dcd2-4616-95b3-4f7d971c6dd8 (site1)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source
# radosgw-admin bucket sync status --bucket=mycontainers1
          realm 25eeb4ef-aa47-4f50-85e3-0c9a68883cf1 (scaleLTA)
      zonegroup 053ddd45-7321-4a3f-b9b7-253edc830725 (cloud07)
           zone d6499579-6232-4209-b1e4-88112599b5ac (site2)
         bucket mycontainers1[7659bed4-dcd2-4616-95b3-4f7d971c6dd8.2907365.1]

    source zone 7659bed4-dcd2-4616-95b3-4f7d971c6dd8 (site1)
                full sync: 0/1 shards
                incremental sync: 1/1 shards
                bucket is behind on 1 shards
                behind shards: [0]
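Since the zone-wide status and the per-bucket status disagree, a quick way to find every lagging bucket is to loop over the bucket list and grep each bucket's sync status for the "behind" marker seen above. A minimal sketch, assuming the command is run on the secondary zone (site2) and that jq is installed:

#!/bin/bash
# List the buckets whose sync status reports being behind on shards.
# "radosgw-admin bucket list" prints a JSON array of bucket names.
for b in $(radosgw-admin bucket list | jq -r '.[]'); do
    if radosgw-admin bucket sync status --bucket="${b}" | grep -q "behind on"; then
        echo "${b} is behind"
    fi
done

In this environment the loop should flag exactly two of the five buckets, matching the description above.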
Comment 1 - RHEL Program Management - 2019-08-21 17:51:58 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:2231
Comment 45 - Red Hat Bugzilla - 2023-09-14 05:42:06 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days