
Bug 1506239

Summary: [RGW-Multisite] - Metadata sync is not happening from primary to secondary.
Product: Red Hat Ceph Storage (Red Hat Storage)
Component: RGW-Multisite
Version: 3.0
Reporter: Manohar Murthy <mmurthy>
Assignee: Casey Bodley <cbodley>
QA Contact: Manohar Murthy <mmurthy>
CC: cbodley, ceph-eng-bugs, ceph-qe-bugs, flucifre, hnallurv, kdreyer, mbenjamin, mmurthy
Status: CLOSED ERRATA
Severity: urgent
Priority: urgent
Target Milestone: rc
Target Release: 3.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: RHEL: ceph-12.2.1-36.el7cp; Ubuntu: ceph_12.2.1-38redhat1xenial
Type: Bug
Last Closed: 2017-12-05 23:49:35 UTC

Comment 26 Manohar Murthy 2017-11-03 11:42:34 UTC
Hi Casey,

Another RGW node (cornell) was updated during the upgrade process; I am not sure why it was causing the sync problem. I have now removed the extra RGW node and retried the same test, and the same problem is happening again.
I have verified the same scenario on three other setups; no sync errors are seen there, but some errors do appear in rgw.log.

At this point I am not able to confirm whether or not this is a setup issue.

The setup now has only one RGW client in each of the primary and secondary sites. Please take a look and see if anything is wrong with it. If the errors shown in all the RGW logs can be ignored, then we can close the bug.
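
One way to check whether the remaining rgw.log messages correspond to real, persistent sync failures is the sync error log that RGW maintains; a minimal check, assuming the same --cluster naming used elsewhere in this report:

[root@fortress ~]# radosgw-admin sync error list --cluster secondary

An empty error list, together with a caught-up sync status, would support treating the logged messages as transient.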

Comment 28 Manohar Murthy 2017-11-06 21:48:03 UTC
Metadata sync and data sync are working fine. No issues were seen during sync, so I am closing the bug.

Version:
[root@fortress ~]# ceph -v
ceph version 12.2.1-37.el7cp (c16105a12f94b5e65d175d620d8d548055c8d490) luminous (stable)
[root@fortress ~]# 

[root@fortress ~]# radosgw-admin sync status --cluster secondary 
          realm 4f0b11dc-3388-4865-aa8c-c93544a37b90 (movies)
      zonegroup aa2114c4-e96f-48b0-a023-825239368582 (us)
           zone 62d6204c-1344-4df6-9803-5424ef394b01 (us-west)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: bf0750fa-3531-4110-a224-2df15c2c445f (us-east)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source
[root@fortress ~]#
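
For completeness, the same check can be run from the master side to confirm the reverse data-sync direction; a sketch, assuming the primary cluster is addressable as --cluster primary (that cluster name is an assumption based on the site names above):

[root@fortress ~]# radosgw-admin sync status --cluster primary

On the master zone, metadata sync is expected to report "no sync (zone is master)", and the data sync source (us-west) should report that it is caught up.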

Thanks Casey for all your help.

Comment 31 errata-xmlrpc 2017-12-05 23:49:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3387