
Bug 1349285

Summary: Multisite object delete issues
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: shilpa <smanjara>
Component: RGW
Assignee: Casey Bodley <cbodley>
Status: CLOSED ERRATA
QA Contact: shilpa <smanjara>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 2.0
CC: cbodley, ceph-eng-bugs, hnallurv, kbader, kdreyer, mbenjamin, owasserm, sweil
Target Milestone: rc
Target Release: 2.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: RHEL: ceph-10.2.2-10.el7cp; Ubuntu: ceph_10.2.2-8redhat1xenial
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-08-23 19:42:19 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
  Peer logs (flags: none)
  Master zone logs (flags: none)

Description shilpa 2016-06-23 07:15:00 UTC
Created attachment 1171317 [details]
Peer logs

Version-Release number of selected component (if applicable):
ceph-radosgw-10.2.2-5.el7cp.x86_64
curl-7.29.0-31.el7 

How reproducible:
Always

Steps to Reproduce:
1. Upload an object of about 1.5 GB to the master zone and wait for the upload to finish. Before the object has finished syncing to the peer zone, delete it from the master zone. At this point the in-flight object sync operation completes anyway, and the object then reverse-syncs, re-creating itself on the master zone.


2. Introduce a network delay of 200 ms between the two RGW nodes and repeat the same operation with a smaller 500 MB object: create it on the master zone and delete it before the sync finishes. This time the initial object-create sync operation never completes, and the two zones remain out of sync:

# radosgw-admin sync status --rgw-zone=us-2 --debug-rgw=0 --debug-ms=0
          realm fedc07d8-a4cc-40c0-b8ad-4e1be8251726 (earth)
      zonegroup 4401713c-7fdf-4619-adea-829c5e7fdd0d (us)
           zone 591f5f4f-2b22-4346-ae9c-45c7e37ad5ac (us-2)
  metadata sync syncing
                full sync: 0/64 shards
                metadata is caught up with master
                incremental sync: 64/64 shards
      data sync source: 38b0ab46-20fd-4c94-9f19-193e86c7e343 (us-1)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is behind on 2 shards
                        oldest incremental change not applied: 2016-06-22 07:37:26.0.924151s
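The reproduction steps above can be sketched roughly as follows. This is a sketch only, assuming an s3cmd profile configured against the master-zone endpoint; the bucket name, file name, and network interface are placeholders, not taken from the report:

```shell
# Step 1: upload a large object to the master zone, then delete it
# before the peer zone finishes syncing it.
dd if=/dev/urandom of=bigobj bs=1M count=1536   # ~1.5 GB test object
s3cmd put bigobj s3://test-bucket/bigobj        # upload to master zone
# ...wait for the upload (not the inter-zone sync) to complete, then:
s3cmd del s3://test-bucket/bigobj               # delete while sync is in flight

# Step 2: add a 200 ms delay between the RGW nodes (run as root on one
# node; eth0 is a placeholder interface) and repeat with a ~500 MB object.
tc qdisc add dev eth0 root netem delay 200ms
# ...repeat the create/delete race, then remove the delay:
tc qdisc del dev eth0 root netem

# Check whether the peer zone caught up:
radosgw-admin sync status --rgw-zone=us-2
```

These commands require a live multisite cluster and root access, so they are illustrative rather than directly runnable here.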

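As a small sketch (not part of the report), the "data is behind" condition shown in the status output above could be detected programmatically by parsing the plain-text output of `radosgw-admin sync status`; the parsing approach and dictionary field names here are assumptions:

```python
import re

def parse_sync_status(output):
    """Return the realm name and how many data-sync shards are behind,
    extracted from `radosgw-admin sync status` text output."""
    info = {"realm": None, "behind_shards": 0, "caught_up": True}
    m = re.search(r"realm\s+\S+\s+\((\S+)\)", output)
    if m:
        info["realm"] = m.group(1)
    m = re.search(r"data is behind on (\d+) shards", output)
    if m:
        info["behind_shards"] = int(m.group(1))
        info["caught_up"] = False
    return info

# Sample trimmed from the status output in this report:
SAMPLE = """\
          realm fedc07d8-a4cc-40c0-b8ad-4e1be8251726 (earth)
      zonegroup 4401713c-7fdf-4619-adea-829c5e7fdd0d (us)
      data sync source: 38b0ab46-20fd-4c94-9f19-193e86c7e343 (us-1)
                        data is behind on 2 shards
"""

print(parse_sync_status(SAMPLE))
```

Against the sample above this reports the `earth` realm with 2 shards behind, i.e. the zones are not caught up.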
Comment 2 shilpa 2016-06-23 07:16:28 UTC
Created attachment 1171318 [details]
Master zone logs

Comment 4 Casey Bodley 2016-06-23 21:28:30 UTC
Work by Yehuda to address this issue was merged upstream in https://github.com/ceph/ceph/pull/9481. I have a small fix to that work in https://github.com/ceph/ceph/pull/9851 that has yet to be merged.

Comment 8 Casey Bodley 2016-06-29 16:30:32 UTC
Ken, all 9 patches for this fix have been cherry-picked to ceph-2-rhel-patches

Comment 11 shilpa 2016-07-12 09:05:37 UTC
The issue is no longer seen as of ceph-10.2.2-15.

Comment 13 errata-xmlrpc 2016-08-23 19:42:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1755.html