Bug 1458734

Summary: [RGW]: After failover and failback on a multisite setup, a period mismatch is seen on both sites.
Product: Red Hat Ceph Storage
Reporter: Tejas <tchandra>
Component: RGW
Assignee: Casey Bodley <cbodley>
Status: CLOSED ERRATA
QA Contact: Rachana Patel <racpatel>
Severity: urgent
Priority: unspecified
Version: 2.3
CC: cbodley, ceph-eng-bugs, hnallurv, icolle, kbader, mbenjamin, owasserm, sweil, tchandra, tserlin, vumrao
Target Milestone: rc
Keywords: Regression
Target Release: 2.3
Hardware: Unspecified
OS: Linux
Fixed In Version: RHEL: ceph-10.2.7-27.el7cp; Ubuntu: ceph_10.2.7-29redhat1
Last Closed: 2017-06-19 13:33:57 UTC
Type: Bug

Description Tejas 2017-06-05 11:08:33 UTC
Description of problem:

    On a 2-site setup, I ran a failover and failback scenario, following the commands exactly as described in the documentation.

Version-Release number of selected component (if applicable):
ceph version 10.2.7-26redhat1xenial

How reproducible:

Steps to Reproduce:
1. Create a couple of buckets on both sites to confirm that sync is working.
2. Bring down the primary (site A), then run a zone modify on site B to make it the master. Run a period update with commit and restart radosgw.
3. Create a new bucket on site B while site A is down.
4. Bring site A back up, pull the period from site B, then run a period update with commit and restart radosgw on site A.
5. Check the sync status on site B; a period mismatch is seen.
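The failover and failback steps above can be sketched with radosgw-admin roughly as follows. This is a sketch, not the exact commands from the report: the zone name, endpoint URL, credentials, and radosgw unit name are placeholders.

```shell
# --- Failover: on site B, while site A is down ---
# Promote the secondary zone to master (zone name is a placeholder)
radosgw-admin zone modify --rgw-zone=site-b --master --default
radosgw-admin period update --commit
systemctl restart ceph-radosgw.target

# --- Failback: on site A, after it comes back up ---
# Pull the current period from site B (URL and keys are placeholders)
radosgw-admin period pull --url=http://site-b-gateway:8080 \
    --access-key=SYSTEM_ACCESS_KEY --secret=SYSTEM_SECRET_KEY
radosgw-admin period update --commit
systemctl restart ceph-radosgw.target

# --- Verify: this is where the period mismatch shows up ---
radosgw-admin sync status
radosgw-admin period get-current
```

After step 4, both sites should report the same current period ID; the bug is that `radosgw-admin sync status` on site B still shows the old period.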

I will enable the logs and provide the system details.

Comment 5 Ian Colle 2017-06-05 15:24:02 UTC
Was this same behavior seen on RHEL?

Comment 18 Rachana Patel 2017-06-12 16:02:29 UTC
Verified with the latest version (container), and it worked; hence moving to the verified state.

Comment 20 errata-xmlrpc 2017-06-19 13:33:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.