
Bug 1992445

Summary: ms - sync status stuck at '83 shards are recovering' in primary cluster when upgraded to rhcs 5.0
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Vasishta <vashastr>
Component: RGW-Multisite
Assignee: Casey Bodley <cbodley>
Status: CLOSED ERRATA
QA Contact: Vasishta <vashastr>
Severity: high
Docs Contact:
Priority: unspecified
Version: 5.0
CC: cbodley, ceph-eng-bugs, ceph-qe-bugs, mbenjamin, tserlin
Target Milestone: ---
Target Release: 5.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-16.2.0-115.el8cp
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-08-30 08:31:54 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Vasishta 2021-08-11 06:32:31 UTC
Description of problem and Steps to Reproduce:
1) Configured two RHCS 4.x clusters with a single RGW daemon per cluster,
with RGW multisite (MS) configuration, using ceph-ansible.
Both clusters were bare-metal clusters.
2) Migrated the primary cluster to a containerized cluster,
upgraded the primary cluster to RHCS 5.0,
and initiated cephadm-adopt on the primary cluster.
3) During the first two steps, RGW bucket/object creation and deletion (using boto scripts) and user creation and deletion were exercised on the primary cluster (see the boto sketch after step 4 below).

4) After cephadm-adopt completed on the primary cluster, the sync status on the primary site was stuck at '83 shards are recovering', even after 13+ hours of observation, whereas the secondary site always reported that data is caught up with the source.
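
For reference, below is a minimal sketch of the kind of boto workload mentioned in step 3. It assumes boto3 against an RGW S3 endpoint; the endpoint URL, credentials, and bucket/object names are placeholders, not values from this report.

import boto3

# Assumption: an RGW S3 endpoint on the primary site; endpoint URL and
# credentials below are hypothetical placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw-primary.example.com:8080",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

bucket = "ms-sync-test-bucket"  # hypothetical bucket name
s3.create_bucket(Bucket=bucket)
s3.put_object(Bucket=bucket, Key="obj-1", Body=b"sample payload")

# The report exercises deletion as well as creation. (User creation and
# deletion were presumably done via radosgw-admin or the RGW admin API,
# not shown in this sketch.)
s3.delete_object(Bucket=bucket, Key="obj-1")
s3.delete_bucket(Bucket=bucket)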


Version-Release number of selected component (if applicable):
16.2.0-114.el8cp

How reproducible:
Not yet tried to reproduce.
This issue was hit while trying to reproduce Bug 1989849.

Actual results:
data sync source: 62995596-f766-4cd3-8399-c85d9d8998e3 (US_WEST)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        83 shards are recovering
                        recovering shards: [0,1,2,8,9,10,11,12,13,14,15,16,17,18,24,25,26,27,28,29,30,31,32,33,34,40,41,42,43,44,45,46,47,48,49,50,56,57,58,59,60,61,62,63,64,65,66,72,74,75,76,77,78,79,80,88,89,90,91,92,93,94,95,96,97,98,104,105,106,107,108,109,110,111,112,113,114,120,121,122,123,124,127]
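
The output above is what 'radosgw-admin sync status' reports on the primary site. Below is a small illustrative sketch of polling that command until no shards report as recovering; the parsing and polling interval are assumptions for illustration, not a tool used in this report.

import subprocess
import time

def recovering_shard_count():
    # Run the same command whose output is quoted above; assumes
    # radosgw-admin is available on the node and uses the default realm/zone.
    out = subprocess.run(
        ["radosgw-admin", "sync", "status"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        line = line.strip()
        if line.endswith("shards are recovering"):
            return int(line.split()[0])
    return 0

# Poll until no shards report as recovering (in this bug the count stayed
# at 83 for 13+ hours).
while (n := recovering_shard_count()) > 0:
    print(f"{n} shards are recovering; waiting...")
    time.sleep(300)
print("data sync has no recovering shards")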


Expected results:
The sync status should get updated accordingly; the recovering shards should complete rather than remain stuck.

Additional info:
Secondary site -
data sync source: 6a12b35e-caf9-4cac-aef6-e0d98335537a (US_EAST)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source

Please refer -
- https://bugzilla.redhat.com/show_bug.cgi?id=1989849#c8
- https://bugzilla.redhat.com/show_bug.cgi?id=1989849#c9

Comment 1 RHEL Program Management 2021-08-11 06:32:38 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 10 errata-xmlrpc 2021-08-30 08:31:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294