Bug 1744276 - sync status reports complete while bucket status does not
Summary: sync status reports complete while bucket status does not
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW-Multisite
Version: 3.2
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: rc
Target Release: 4.1
Assignee: Casey Bodley
QA Contact: Uday kurundwade
URL:
Whiteboard:
Depends On:
Blocks: 1727980 1746127
 
Reported: 2019-08-21 17:51 UTC by Tim Wilkinson
Modified: 2023-09-14 05:42 UTC
CC: 9 users

Fixed In Version: ceph-14.2.4-96.el8cp, ceph-14.2.4-36.el7cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1746127
Environment:
Last Closed: 2020-05-19 17:30:44 UTC
Embargoed:




Links
Ceph Project Bug Tracker 41412 (last updated 2019-08-23 17:50:23 UTC)
GitHub ceph/ceph pull 29856, closed: rgw: RGWCoroutine::call(nullptr) sets retcode=0 (last updated 2021-02-21 17:34:20 UTC)
Red Hat Product Errata RHSA-2020:2231 (last updated 2020-05-19 17:31:12 UTC)

Description Tim Wilkinson 2019-08-21 17:51:52 UTC
Description of problem:
----------------------
The secondary site of this Ceph 3.2z2 multisite environment reports its sync status as complete (metadata and data caught up with source), but two of the five individual buckets report being behind on shards.



Component Version-Release:
-------------------------
Red Hat Enterprise Linux 7.6 (Maipo), kernel 3.10.0-957.el7.x86_64
ceph-base.x86_64   2:12.2.8-128.el7cp



How reproducible:
----------------
unknown



Steps to Reproduce:
------------------
1. configure multisite on two clusters using ansible playbook
2. populate master site with data, observe sync activity
3. check sync status and bucket sync status (see the sketch after this list)
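
A rough sketch of steps 2 and 3, assuming an S3 client such as awscli against the master site; the endpoint, credentials, and test data path are placeholders, and the bucket name is taken from the output further down:

# placeholders: master-zone RGW endpoint and one user's S3 keys
export AWS_ACCESS_KEY_ID=<master-zone-access-key>
export AWS_SECRET_ACCESS_KEY=<master-zone-secret-key>
ENDPOINT=http://master-rgw.example.com:8080

# step 2: populate the master site with data
aws --endpoint-url "$ENDPOINT" s3 mb s3://mycontainers1
aws --endpoint-url "$ENDPOINT" s3 cp ./testdata s3://mycontainers1/ --recursive

# step 3: on the secondary site, compare overall and per-bucket sync status
radosgw-admin sync status
radosgw-admin bucket sync status --bucket=mycontainers1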



Actual results:
--------------
overall sync status reports caught up, but individual bucket sync status reports shards behind



Expected results:
----------------
both the overall sync status and each individual bucket sync status report caught up



Additional info:
---------------
# radosgw-admin sync status
          realm 25eeb4ef-aa47-4f50-85e3-0c9a68883cf1 (scaleLTA)
      zonegroup 053ddd45-7321-4a3f-b9b7-253edc830725 (cloud07)
           zone d6499579-6232-4209-b1e4-88112599b5ac (site2)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 7659bed4-dcd2-4616-95b3-4f7d971c6dd8 (site1)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source



# radosgw-admin bucket sync status --bucket=mycontainers1
          realm 25eeb4ef-aa47-4f50-85e3-0c9a68883cf1 (scaleLTA)
      zonegroup 053ddd45-7321-4a3f-b9b7-253edc830725 (cloud07)
           zone d6499579-6232-4209-b1e4-88112599b5ac (site2)
         bucket mycontainers1[7659bed4-dcd2-4616-95b3-4f7d971c6dd8.2907365.1]

    source zone 7659bed4-dcd2-4616-95b3-4f7d971c6dd8 (site1)
                full sync: 0/1 shards
                incremental sync: 1/1 shards
                bucket is behind on 1 shards
                behind shards: [0]
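
One way to surface this mismatch across all buckets on the secondary zone is a loop along these lines (a sketch, not part of the original report; assumes jq is installed and keys off the "behind on" wording shown above):

for b in $(radosgw-admin bucket list | jq -r '.[]'); do
    if radosgw-admin bucket sync status --bucket="$b" | grep -q 'behind on'; then
        echo "behind: $b"
    fi
done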

Comment 1 RHEL Program Management 2019-08-21 17:51:58 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 44 errata-xmlrpc 2020-05-19 17:30:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:2231
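
To confirm a node has picked up the fixed build listed in the Fixed In Version field, something like the following should work (a sketch; the exact package set is an assumption):

# check the installed Ceph build on each RGW node
rpm -q ceph-radosgw ceph-common
# expect 14.2.4-36.el7cp or later on RHEL 7, 14.2.4-96.el8cp or later on RHEL 8
yum update "ceph-*"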

Comment 45 Red Hat Bugzilla 2023-09-14 05:42:06 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

