Bug 1349285 - Multisite object delete issues
Summary: Multisite object delete issues
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: RGW
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: rc
Target Release: 2.0
Assignee: Casey Bodley
QA Contact: shilpa
Depends On:
Reported: 2016-06-23 07:15 UTC by shilpa
Modified: 2017-07-30 16:01 UTC
CC List: 8 users

Fixed In Version: RHEL: ceph-10.2.2-10.el7cp Ubuntu: ceph_10.2.2-8redhat1xenial
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2016-08-23 19:42:19 UTC
Target Upstream Version:

Attachments (Terms of Use)
Peer logs (15.72 MB, application/zip)
2016-06-23 07:15 UTC, shilpa
Master zone logs (9.29 MB, application/zip)
2016-06-23 07:16 UTC, shilpa

System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 16464 0 None None None 2016-06-23 21:28:30 UTC
Red Hat Product Errata RHBA-2016:1755 0 normal SHIPPED_LIVE Red Hat Ceph Storage 2.0 bug fix and enhancement update 2016-08-23 23:23:52 UTC

Description shilpa 2016-06-23 07:15:00 UTC
Created attachment 1171317 [details]
Peer logs

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
Upload an object of about 1.5G from the master zone and wait for the upload to finish. Before the object has finished syncing to the peer zone, delete it from the master zone. At this point the object sync operation had already completed, and the object then starts to sync in the reverse direction, re-creating it on the master zone.
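A rough reproduction sketch of the steps above. The bucket name, object name, and s3cmd configuration are assumptions for illustration and do not come from this report; the commands assume s3cmd is configured against the master zone's RGW endpoint:

```shell
# Create a ~1.5G test object (name and size chosen to match the report)
dd if=/dev/urandom of=bigobj bs=1M count=1536

# Upload it to the master zone and wait for the PUT to complete
# (bucket name "test-bucket" is a hypothetical example)
s3cmd put bigobj s3://test-bucket/bigobj

# Delete the object from the master zone before the peer zone
# finishes syncing it
s3cmd del s3://test-bucket/bigobj

# Watch sync progress from the secondary zone
radosgw-admin sync status --rgw-zone=us-2
```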

Next, introduced a network delay of 200ms between the two RGW nodes and repeated the same operation: created an object on the master zone, this time with a smaller 500MB file, and deleted it before the sync finished. This time, the initial object-create sync operation never completed, and the two zones were left out of sync:

# radosgw-admin sync status --rgw-zone=us-2 --debug-rgw=0 --debug-ms=0
          realm fedc07d8-a4cc-40c0-b8ad-4e1be8251726 (earth)
      zonegroup 4401713c-7fdf-4619-adea-829c5e7fdd0d (us)
           zone 591f5f4f-2b22-4346-ae9c-45c7e37ad5ac (us-2)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 38b0ab46-20fd-4c94-9f19-193e86c7e343 (us-1)
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is behind on 2 shards
                        oldest incremental change not applied: 2016-06-22 07:37:26.0.924151s
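For quick triage, the out-of-sync condition in the status output above can be detected programmatically. The following is a minimal sketch (not part of the bug report) that scans the plain-text `radosgw-admin sync status` output for the "data is behind on N shards" line shown above; the sample text is copied from this report:

```python
import re

def data_sync_behind(status_text):
    """Return the number of data shards the zone is behind on,
    parsed from `radosgw-admin sync status` output (0 if caught up)."""
    m = re.search(r"data is behind on (\d+) shards", status_text)
    return int(m.group(1)) if m else 0

# Sample output, as captured in this bug report
status = """\
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 38b0ab46-20fd-4c94-9f19-193e86c7e343 (us-1)
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is behind on 2 shards
"""

print(data_sync_behind(status))
```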

Comment 2 shilpa 2016-06-23 07:16:28 UTC
Created attachment 1171318 [details]
Master zone logs

Comment 4 Casey Bodley 2016-06-23 21:28:30 UTC
Work by Yehuda to address this issue was merged upstream in https://github.com/ceph/ceph/pull/9481. I have a small fix to that work in https://github.com/ceph/ceph/pull/9851 that has yet to be merged.

Comment 8 Casey Bodley 2016-06-29 16:30:32 UTC
Ken, all 9 patches for this fix have been cherry-picked to ceph-2-rhel-patches.

Comment 11 shilpa 2016-07-12 09:05:37 UTC
The issue is no longer seen as of ceph 10.2.2-15.

Comment 13 errata-xmlrpc 2016-08-23 19:42:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

