Bug 2143336 - The sync status indicates that "the data is caught up with source" but not all objects are synced
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW-Multisite
Version: 5.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.3
Assignee: Matt Benjamin (redhat)
QA Contact: Hemanth Sai
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-11-16 16:25 UTC by Ameena Suhani S H
Modified: 2023-01-11 17:43 UTC (History)
8 users

Fixed In Version: ceph-16.2.10-92.el8cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-01-11 17:42:24 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-5623 0 None None None 2022-11-16 16:38:44 UTC
Red Hat Product Errata RHSA-2023:0076 0 None None None 2023-01-11 17:43:38 UTC

Description Ameena Suhani S H 2022-11-16 16:25:35 UTC
Description of problem:
The sync status indicates that "the data is caught up with source" but not all objects are synced

Version-Release number of selected component (if applicable):
ceph version 16.2.10-74.el8cp

How reproducible:
2/2

Steps to Reproduce:
1. Install 4.3z1 with multiple instances and multiple realms.
2. Fill the cluster to 7-10% capacity.
3. Measure - 1 hr hybrid workload.
4. Age - 7 hrs.
5. Delete 1 bucket on both clusters.
6. Upgrade workload - 8 hr hybrid while upgrading to the 5.3 build (ceph version 16.2.10-69.el8cp).
7. Enable sharding on only one realm (RealmOne) on both sites.
8. Aging - 8 hr hybrid.
9. Measure - 1 hr hybrid.
10. Upgrade - 4 hrs hybrid while upgrading from the 5.3 build (ceph version 16.2.10-69.el8cp) to the latest 5.3 build (ceph version 16.2.10-74.el8cp).
11. 12+ hrs with no I/O.


Actual results:
The sync status indicates that "the data is caught up with source" but not all objects are synced

Expected results:
All objects should be synced.
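Since `radosgw-admin sync status` reported "data is caught up with source" despite the mismatch, verification has to compare per-bucket object counts across the two sites directly. A minimal sketch of that check, assuming the JSON output of `radosgw-admin bucket stats` has been captured on each site (field names `bucket`, `usage`, and `num_objects` follow the stats output format; the helper names are hypothetical):

```python
import json


def object_count(stats):
    """Total object count for one bucket entry from `radosgw-admin bucket stats` JSON.

    Sums num_objects across all usage categories (rgw.main, rgw.multimeta, ...).
    """
    return sum(cat.get("num_objects", 0) for cat in stats.get("usage", {}).values())


def diff_sites(primary_stats, secondary_stats):
    """Return buckets whose object counts differ between the two sites.

    Each argument is the parsed JSON list produced by
    `radosgw-admin bucket stats` on that site.
    """
    primary = {s["bucket"]: object_count(s) for s in primary_stats}
    secondary = {s["bucket"]: object_count(s) for s in secondary_stats}
    mismatched = {}
    # Union of bucket names so buckets missing entirely on one site also show up.
    for bucket in primary.keys() | secondary.keys():
        p, s = primary.get(bucket, 0), secondary.get(bucket, 0)
        if p != s:
            mismatched[bucket] = (p, s)
    return mismatched
```

Usage would be to run `radosgw-admin bucket stats` on each site, save the JSON to a file per site, then feed both parsed lists to `diff_sites` and report any non-empty result as out-of-sync buckets, regardless of what the sync status says.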

Comment 17 errata-xmlrpc 2023-01-11 17:42:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.3 security update and Bug Fix), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:0076