Bug 2143336

Summary: The sync status indicates that "the data is caught up with source" but not all objects are synced
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Ameena Suhani S H <amsyedha>
Component: RGW-Multisite
Assignee: Matt Benjamin (redhat) <mbenjamin>
Status: CLOSED ERRATA
QA Contact: Hemanth Sai <hmaheswa>
Severity: high
Priority: unspecified
Version: 5.3
CC: akraj, ceph-eng-bugs, cephqe-warriors, mbenjamin, mkasturi, racpatel, tserlin, vereddy
Keywords: TestBlocker
Target Release: 5.3
Hardware: Unspecified
OS: Unspecified
Fixed In Version: ceph-16.2.10-92.el8cp
Doc Type: No Doc Update
Last Closed: 2023-01-11 17:42:24 UTC
Type: Bug

Description Ameena Suhani S H 2022-11-16 16:25:35 UTC
Description of problem:
The sync status indicates that "the data is caught up with source" but not all objects are synced

Version-Release number of selected component (if applicable):
ceph version 16.2.10-74.el8cp

How reproducible:
2/2 (reproduced in 2 out of 2 attempts)

Steps to Reproduce:
1. Install 4.3z1 with multiple RGW instances and multiple realms.
2. Fill the clusters to 7-10% of capacity with a fill workload.
3. Measure: 1 hr hybrid workload.
4. Age: 7 hrs.
5. Delete 1 bucket on both clusters.
6. Upgrade workload: 8 hr hybrid workload while upgrading to the 5.3 build (ceph version 16.2.10-69.el8cp).
7. Enable sharding on only one realm (RealmOne) on both sites.
8. Age: 8 hr hybrid workload.
9. Measure: 1 hr hybrid workload.
10. Upgrade: 4 hr hybrid workload while upgrading from the 5.3 build (ceph version 16.2.10-69.el8cp) to the latest 5.3 build (ceph version 16.2.10-74.el8cp).
11. Leave the clusters with no I/O for 12+ hrs, then check the multisite sync status (a sketch of this check follows these steps).
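
As a post-idle check, a minimal sketch of polling the multisite sync status from each site is below. It assumes passwordless SSH to an admin node on each site where radosgw-admin is installed; the hostnames are placeholders, not values from this report.

#!/usr/bin/env python3
# Minimal sketch: check whether "radosgw-admin sync status" on each site
# reports "data is caught up with source".
# Hostnames below are hypothetical; assumes passwordless SSH to an admin
# node on each site where radosgw-admin is available.
import subprocess

SITES = ["site1-admin.example.com", "site2-admin.example.com"]  # placeholders

def sync_caught_up(host: str) -> bool:
    """Return True if the sync status output claims data is caught up."""
    out = subprocess.run(
        ["ssh", host, "radosgw-admin", "sync", "status"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Per this report, the string can appear even when objects are still
    # missing on the peer site, so treat it as a necessary check only.
    return "data is caught up with source" in out

if __name__ == "__main__":
    for site in SITES:
        state = "caught up" if sync_caught_up(site) else "behind"
        print(f"{site}: data sync reports {state}")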


Actual results:
The sync status indicates that "the data is caught up with source" but not all objects are synced

Expected results:
All objects should be synced between both sites.
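
One way to make "all objects synced" concrete is to compare per-bucket object counts as seen through each site's RGW S3 endpoint. The sketch below uses boto3; the endpoints, credentials, and bucket names are placeholder assumptions, not values from this report.

#!/usr/bin/env python3
# Minimal sketch: compare per-bucket object counts between the two sites'
# RGW S3 endpoints. Endpoints, credentials, and bucket names below are
# placeholders, not values from this report.
import boto3

ENDPOINTS = {
    "site1": "http://rgw.site1.example.com:8080",
    "site2": "http://rgw.site2.example.com:8080",
}
ACCESS_KEY = "REPLACE_ME"
SECRET_KEY = "REPLACE_ME"
BUCKETS = ["bucket-1", "bucket-2"]  # buckets under test

def object_count(endpoint: str, bucket: str) -> int:
    """Count objects in a bucket by paging through list_objects_v2."""
    s3 = boto3.client(
        "s3",
        endpoint_url=endpoint,
        aws_access_key_id=ACCESS_KEY,
        aws_secret_access_key=SECRET_KEY,
    )
    total = 0
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        total += page.get("KeyCount", 0)
    return total

if __name__ == "__main__":
    for bucket in BUCKETS:
        counts = {site: object_count(ep, bucket) for site, ep in ENDPOINTS.items()}
        status = "OK" if len(set(counts.values())) == 1 else "MISMATCH"
        print(f"{bucket}: {counts} -> {status}")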

Comment 17 errata-xmlrpc 2023-01-11 17:42:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.3 security update and Bug Fix), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:0076