Bug 1888501 - [RGW MS - MultiSite] : not all objects are synced on secondary site/cluster for one bucket
Keywords:
Status: ASSIGNED
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW-Multisite
Version: 4.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 5.*
Assignee: Casey Bodley
QA Contact: Vidushi Mishra
URL:
Whiteboard:
Depends On: 1905369
Blocks:
 
Reported: 2020-10-15 04:02 UTC by Rachana Patel
Modified: 2023-01-09 19:43 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:




Links:
Red Hat Issue Tracker RHCEPH-4326 (last updated 2022-05-18 11:18:25 UTC)

Description Rachana Patel 2020-10-15 04:02:16 UTC
Description of problem:
=======================
Not all objects were synced to the secondary site for one bucket (mycontainers1), yet the bucket's sync status reports that data is in sync: 517 objects are missing on the secondary site. We waited for more than 14 days, but the status did not change.
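
For reference, the per-site "num_objects" values below can be read with "radosgw-admin bucket stats". A minimal sketch; the jq filter assumes the usual stats layout of usage -> "rgw.main" -> num_objects, and jq itself is assumed to be installed:

radosgw-admin bucket stats --bucket mycontainers1 | jq '.usage."rgw.main".num_objects'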


On Primary site:-
===================
 "num_objects": 1500000

[root@extensa003 ~]# radosgw-admin bucket sync status --bucket mycontainers1
          realm 54018a08-959e-47c8-8647-609d90feb4b7 (movies)
      zonegroup ddffa514-1204-43a7-b4ad-5ab0b2f4d91e (us)
           zone 7da47966-0c26-4908-8493-4bb17c016835 (us-east)
         bucket mycontainers1[7da47966-0c26-4908-8493-4bb17c016835.11026.1]

    source zone f81025a5-443a-4723-a125-8b9cb18a755d (us-west)
                full sync: 0/11 shards
                incremental sync: 11/11 shards
                bucket is caught up with source

On secondary site:-
====================
"num_objects": 1499483
[root@extensa010 ~]# radosgw-admin bucket sync status --bucket mycontainers1
          realm 54018a08-959e-47c8-8647-609d90feb4b7 (movies)
      zonegroup ddffa514-1204-43a7-b4ad-5ab0b2f4d91e (us)
           zone f81025a5-443a-4723-a125-8b9cb18a755d (us-west)
         bucket mycontainers1[7da47966-0c26-4908-8493-4bb17c016835.11026.1]

    source zone 7da47966-0c26-4908-8493-4bb17c016835 (us-east)
                full sync: 0/11 shards
                incremental sync: 11/11 shards
                bucket is caught up with source
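
To identify exactly which objects are missing, one option is to dump and diff the bucket listings from each site. This is a sketch, not verified on this cluster: the /tmp paths and the 2000000 entry cap are arbitrary, and jq is assumed to be available.

# on the primary (us-east):
radosgw-admin bucket list --bucket mycontainers1 --max-entries 2000000 | jq -r '.[].name' | sort > /tmp/objects.us-east
# on the secondary (us-west), then copy the result to the primary:
radosgw-admin bucket list --bucket mycontainers1 --max-entries 2000000 | jq -r '.[].name' | sort > /tmp/objects.us-west
# names present on the primary but absent on the secondary:
comm -23 /tmp/objects.us-east /tmp/objects.us-west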





Version-Release number of selected component (if applicable):
============================================================
ceph version 14.2.11-26.el7cp

How reproducible:
==================
always

Steps to Reproduce:
==================
1. Created two clusters and established an active-active multisite relationship.
2. On the replicated zone, created 5 buckets and ran read, list, and write ops on them using Swift via COSBench
(deleted objects, but also overwrote a few objects)
(no resharding or GC/LC ops were done, and no versioned buckets were used; a count-comparison sketch follows these steps)
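
As a quick cross-site check after the workload, the object counts for all five buckets can be compared with a loop like the one below. This is only a sketch: the bucket names mycontainers1..mycontainers5 are hypothetical (only mycontainers1 is named in this report), and jq is assumed to be installed.

# hypothetical bucket names; run on each site and compare the output
for b in mycontainers{1..5}; do
    printf '%s: ' "$b"
    radosgw-admin bucket stats --bucket "$b" | jq '.usage."rgw.main".num_objects'
done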

Actual results:
===============
For one bucket the object counts do not match: 517 objects are missing on the secondary site, yet the sync status reports "bucket is caught up with source".


On the primary site "num_objects" is 1500000, while on the secondary site it is 1499483.


Expected results:
================
1) All objects should be replicated.
2) The bucket sync status should not report "bucket is caught up with source" while objects remain unreplicated.



Additional info:

