Bug 2239921 - [rgw-ms][archive]: The overall sync status shows behind shards in the archive zone when executing a hybrid workload for 7 hours on the primary and secondary sites.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW-Multisite
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: 7.0
Assignee: shilpa
QA Contact: Vidushi Mishra
Docs Contact: Rivka Pollack
URL:
Whiteboard:
Depends On:
Blocks: 2237662
 
Reported: 2023-09-20 19:47 UTC by Vidushi Mishra
Modified: 2024-05-25 04:25 UTC (History)
CC: 7 users

Fixed In Version: ceph-18.2.0-99.el9cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-12-13 15:23:32 UTC
Embargoed:




Links:
Red Hat Issue Tracker RHCEPH-7501 (last updated 2023-09-20 19:49:43 UTC)
Red Hat Product Errata RHBA-2023:7780 (last updated 2023-12-13 15:23:36 UTC)

Description Vidushi Mishra 2023-09-20 19:47:56 UTC
Description of problem:

We observe that the overall sync status reports behind shards in the archive zone after executing a hybrid workload for 7 hours on the primary and secondary sites.

Version-Release number of selected component (if applicable):

ceph version 17.2.6-142.el9cp

How reproducible:
1/1

Steps to Reproduce:
1. Create a multisite configuration with an archive site (inter-site latency is around 100 ms).
2. Create 6 buckets, cosbkt-{1..6}, and upload around 1.6M objects to each bucket bidirectionally (from the primary and secondary sites).
3. Perform a 7-hour hybrid workload from the primary and secondary sites.
4. Wait for the sites to synchronize and check the sync status on each site (a sketch of the verification commands follows this list).
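A minimal sketch of the verification commands, assuming the radosgw-admin CLI is run on a node in each site; the bucket name comes from step 2, and everything else is illustrative rather than the reporter's exact procedure:

    # Overall sync status, run on the primary, secondary, and archive sites.
    radosgw-admin sync status

    # Per-bucket sync detail for one of the test buckets.
    radosgw-admin bucket sync status --bucket=cosbkt-1

    # Compare object counts for the same bucket across the sites.
    radosgw-admin bucket stats --bucket=cosbkt-1 | grep num_objects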


Actual results:

- We observed that the data and sync status are consistent between the primary and secondary sites.

- However, on the archive site, the overall sync status (which is how consistency is determined for the archive zone) shows behind shards.
 

---------------- snippet ---------------------

[root@folio21 ~]#  radosgw-admin sync status
          realm b8ac4103-bb1e-464a-908c-be4c470c0277 (india)
      zonegroup 677cecfc-573d-4faa-b497-b185b2ab00ea (shared)
           zone 12e27b3d-e235-48af-bdd8-fabaed502366 (archive)
   current time 2023-09-20T14:57:30Z
zonegroup features enabled: compress-encrypted,resharding
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 1108c11d-d1fc-4ba6-8689-ed2d3fdfd6e5
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is behind on 58 shards
                        behind shards: [0,1,2,23,24,25,27,28,29,30,31,32,33,34,35,36,37,38,40,41,43,44,45,46,47,48,49,51,53,54,56,57,58,60,61,66,68,70,71,72,74,76,77,78,80,81,84,92,99,101,102,103,108,110,115,121,123,127]
                        oldest incremental change not applied: 2023-09-16T15:43:41.006328+0000 [0]
                source: 8fb180d2-5e68-4dce-b3e0-f55183dd6336
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is behind on 117 shards
                        behind shards: [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,28,29,30,31,32,33,34,35,39,40,41,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,64,65,66,67,68,69,70,72,73,74,75,76,77,78,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,103,104,105,106,107,108,109,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]
                        oldest incremental change not applied: 2023-09-16T15:43:22.335907+0000 [0]
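
Beyond the overall status above, shard-level lag on the archive zone can also be inspected per source; the commands below are an illustrative follow-up (not part of the original report), with <source-zone-name> as a placeholder:

    # List recorded sync errors, if any.
    radosgw-admin sync error list

    # Data sync state for a single source zone.
    radosgw-admin data sync status --source-zone=<source-zone-name>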


Expected results:


Data should also be caught up on the archive zone, as reported by the sync status.
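
For reference, a caught-up data source in the archive zone's sync status would be expected to read roughly as follows (illustrative output; the zone ID is taken from the snippet above):

      data sync source: 1108c11d-d1fc-4ba6-8689-ed2d3fdfd6e5
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source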

Additional info:

Comment 12 errata-xmlrpc 2023-12-13 15:23:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:7780

Comment 13 Red Hat Bugzilla 2024-05-25 04:25:10 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

