Bug 1894823 - [RGW-Multisite]: Sync fails to start on a multi realm primary cluster
Summary: [RGW-Multisite]: Sync fails to start on a multi realm primary cluster
Keywords:
Status: CLOSED DUPLICATE of bug 1917687
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW-Multisite
Version: 4.2
Hardware: Unspecified
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.2z1
Assignee: shilpa
QA Contact: Tejas
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-11-05 07:51 UTC by Tejas
Modified: 2021-02-10 17:32 UTC (History)
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-10 17:32:01 UTC
Embargoed:



Description Tejas 2020-11-05 07:51:24 UTC
Description of problem:
    I have a primary cluster with 3 realms (2 local and 1 replicated), with data present in all realms. I tried to establish multisite on the replicated realm with another cluster (which has no local realms). Sync has been stuck at "preparing for full sync" for over 2 days.

Version-Release number of selected component (if applicable):
ceph version 14.2.11-69.el7cp

How reproducible:
Always

Steps to Reproduce:
1. On the primary cluster, create 3 realms and run COSBench I/O on all of them (a rough sketch of the realm-creation commands appears after this list). Followed the doc steps: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html-single/object_gateway_configuration_and_administration_guide/index#configuring-multiple-realms-in-the-same-cluster_rgw

2. Filled around 350 TB total on the primary, across all 3 realms.
3. Started configuring multisite on one of the realms (replicated1) to the secondary cluster with the usual steps (a sketch of the secondary-side commands also appears after this list).
4. Was able to pull the realm and period, but data sync failed to start; metadata appears to be synced.
5. The replicated1 realm's primary zone (in-rep1) has a mix of zone-specific and realm-specific users. Not sure if this makes a difference; just thought it worth mentioning.
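
For step 1, a minimal sketch of adding one extra realm to the same cluster, following the linked doc procedure. All names (local2, localzg2, localzone2, sync-user2), the endpoint URL, and the RGW instance name are illustrative placeholders, not the ones actually used in this setup:

radosgw-admin realm create --rgw-realm=local2
radosgw-admin zonegroup create --rgw-zonegroup=localzg2 --rgw-realm=local2 \
    --endpoints=http://<rgw-host>:8080 --master
radosgw-admin zone create --rgw-zonegroup=localzg2 --rgw-zone=localzone2 \
    --rgw-realm=local2 --endpoints=http://<rgw-host>:8080 --master
radosgw-admin user create --uid=sync-user2 --display-name="Sync User" --system \
    --rgw-realm=local2 --rgw-zone=localzone2
radosgw-admin period update --commit --rgw-realm=local2
# Point a dedicated RGW instance at this realm/zone in ceph.conf, e.g.:
#   [client.rgw.<instance>]
#   rgw_realm = local2
#   rgw_zonegroup = localzg2
#   rgw_zone = localzone2
# then restart that radosgw instance. Repeat for each additional realm.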


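For steps 3 and 4, the "usual steps" on the secondary cluster were roughly the standard zone-pull sequence sketched below. The realm/zonegroup/zone names are taken from the sync status output further down; the endpoint URLs and system-user keys are placeholders:

radosgw-admin realm pull --rgw-realm=replicated1 \
    --url=http://<primary-rgw>:8080 --access-key=<key> --secret=<secret>
radosgw-admin period pull \
    --url=http://<primary-rgw>:8080 --access-key=<key> --secret=<secret>
radosgw-admin zone create --rgw-zonegroup=repzg --rgw-zone=in-rep2 \
    --endpoints=http://<secondary-rgw>:8080 \
    --access-key=<key> --secret=<secret>
radosgw-admin period update --commit
# Restart the gateway(s) for the new zone, then watch progress:
radosgw-admin sync status
# In this report, the data sync source on both sides never moves past
# "preparing for full sync" (see the output below).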

Primary:
f03-h25-000-r620 ~]# radosgw-admin sync status 
          realm 447a9735-853d-4d45-9a7f-8e79d970a3b2 (replicated1)
      zonegroup 096dab5b-9957-4fde-b665-686f8aa1d42b (repzg)
           zone d0393e0d-8837-4933-a5e4-a3fb6534d000 (in-rep1)
  metadata sync no sync (zone is master)
      data sync source: 6e81e072-46c4-4758-a677-1d2ef9bd2d68 (in-rep2)
                        preparing for full sync
                        full sync: 128/128 shards
                        full sync: 0 buckets to sync
                        incremental sync: 0/128 shards
                        data is behind on 128 shards
                        behind shards: [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]


Secondary:
f26-h01-000-6048r ~]# radosgw-admin sync status
          realm 447a9735-853d-4d45-9a7f-8e79d970a3b2 (replicated1)
      zonegroup 096dab5b-9957-4fde-b665-686f8aa1d42b (repzg)
           zone 6e81e072-46c4-4758-a677-1d2ef9bd2d68 (in-rep2)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: d0393e0d-8837-4933-a5e4-a3fb6534d000 (in-rep1)
                        preparing for full sync
                        full sync: 128/128 shards
                        full sync: 0 buckets to sync
                        incremental sync: 0/128 shards
                        data is behind on 128 shards
                        behind shards: [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]

Comment 4 Adam C. Emerson 2021-02-10 17:32:01 UTC

*** This bug has been marked as a duplicate of bug 1917687 ***

