Bug 1326290 - Multisite sync stopped/hung after uploading objects
Summary: Multisite sync stopped/hung after uploading objects
Keywords:
Status: CLOSED DUPLICATE of bug 1327142
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: rc
Target Release: 2.0
Assignee: Casey Bodley
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-04-12 10:57 UTC by shilpa
Modified: 2017-07-30 15:39 UTC
CC List: 11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-05-26 19:06:08 UTC
Embargoed:


Attachments
master node logs (9.61 MB, text/plain) - 2016-05-02 14:50 UTC, shilpa
Logs (8.48 MB, text/plain) - 2016-05-02 14:54 UTC, shilpa


Links:
Ceph Project Bug Tracker 15480 (last updated 2016-04-22 16:41:49 UTC)

Description shilpa 2016-04-12 10:57:00 UTC
Description of problem:
Configured active-active multisite with two zones and verified that buckets and objects were syncing. Created a multipart object on the master zone, but the object did not sync to the secondary zone. Any subsequent object creations are queued and the sync status remains "syncing".

Version-Release number of selected component (if applicable):
ceph-radosgw-10.1.1-1.el7cp.x86_64

Steps to Reproduce:
1. Configure active-active multisite clusters with two zones.
2. Create objects/buckets in each zone and verify that they sync.
3. Try a multipart upload in one of the zones. I uploaded a 1.5 GB file to the master zone and then another multipart file from the secondary zone (see the example command after these steps).
4. The sync status has been hung for almost an hour.
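
For reference, one way to drive a multipart upload against RGW is s3cmd, which splits uploads larger than its chunk size into multipart parts. This is only a hedged sketch; the endpoint, credentials, and chunk size below are illustrative and not necessarily the exact client invocation used in this test:

# minimal sketch, assuming s3cmd is pointed at the master zone's endpoint
$ s3cmd --host=rgw1:8080 --host-bucket=rgw1:8080 \
        --access_key=${access_key} --secret_key=${secret} \
        --multipart-chunk-size-mb=15 \
        put big.txt s3://bigbucket/big.txt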

# radosgw-admin sync status 
          realm 4e00a610-36e9-43d0-803e-4001442b8232 (earth)
      zonegroup e66e1293-e63b-4afe-9dad-3397647dfb03 (us)
           zone acadcc66-10b9-4829-b8e2-306c0048bff5 (us-1)
  metadata sync no sync (zone is master)
      data sync source: 001da65b-c3a8-42e2-a1ce-79cacefbace2 (us-2)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is behind on 2 shards

# radosgw-admin sync status --rgw-zone=us-2
          realm 4e00a610-36e9-43d0-803e-4001442b8232 (earth)
      zonegroup e66e1293-e63b-4afe-9dad-3397647dfb03 (us)
           zone 001da65b-c3a8-42e2-a1ce-79cacefbace2 (us-2)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is behind on 5 shards
                oldest incremental change not applied: 2016-04-12 09:28:08.0.696735s
      data sync source: acadcc66-10b9-4829-b8e2-306c0048bff5 (us-1)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is behind on 2 shards
                        oldest incremental change not applied: 2016-04-12 08:39:55.0.614701s


Any buckets or objects created subsequently are not syncing.

On master:

# radosgw-admin bucket list
[
    "container3",
    "container5",
    "new-bucket",
    "container2",
    "bigbucket",
    "container4",
    "my-new-bucket",
    "container"
]

On the peer zone:

# radosgw-admin bucket list
[
    "container3",
    "bigbucket",
    "container4",
    "my-new-bucket",
    "container"
]

From master:
# swift -A http://rgw1:8080/auth/1.0 -U test-user:swift -K 'kzmbCQgR3L5CqjQmvjatXLjeZi1Ss8RFlWLGu1Vj' list bigbucket
big.txt

From peer:
# swift -A http://rgw2:8080/auth/1.0 -U test-user:swift -K 'kzmbCQgR3L5CqjQmvjatXLjeZi1Ss8RFlWLGu1Vj' list bigbucket
f22.iso

I don't find any sync errors in the rgw logs. 


Additional info:

Comment 2 Orit Wasserman 2016-04-12 12:21:27 UTC
Hi Shilpa,

As this problem should be fixed in the upstream version first,
can you open an upstream issue at http://tracker.ceph.com/
and update this BZ with its number?

Thanks,
Orit

Comment 3 shilpa 2016-04-13 06:41:29 UTC
(In reply to Orit Wasserman from comment #2)
> Hi Shilpa,
> 
> As this problem should be fixed in the upstream version first,
> can you open an upstream issue at http://tracker.ceph.com/
> and update this BZ with its number?
> 
> Thanks,
> Orit

Thanks Orit.
Upstream tracker: http://tracker.ceph.com/issues/15480

Comment 4 Casey Bodley 2016-04-20 15:07:05 UTC
I've been so far unable to reproduce this with multipart uploads in upstream testing. There are two relevant fixes that were merged to master in the past month:
https://github.com/ceph/ceph/pull/8190
https://github.com/ceph/ceph/pull/8453

Comment 5 Christina Meno 2016-04-21 15:13:43 UTC
Would it be possible to get an environment from QE that replicates this issue so that Casey can see it in action?

Comment 6 Casey Bodley 2016-04-21 19:18:25 UTC
I'm still working to reproduce this locally. Are multipart uploads specific to this failure, or have you seen the same sync failures in tests without multipart?

It would help if we could narrow this down to a small set of steps that can consistently reproduce this. Setting these configuration values on all gateways should make the behavior more deterministic, and reduce the amount of log output:

  rgw md log max shards = 1
  rgw data log num shards = 1

If you're unable to reproduce the issues with 1 shard, please try with 8.

Once you've reproduced a case where sync has hung for 5 minutes or more, please provide full logs for both gateways.

Also helpful would be the list of commands used to reproduce, including the radosgw-admin commands to set up the multisite configuration.
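
For illustration, these options live in ceph.conf under the gateway's own section; the instance name below is an assumption, and the debug settings are only the usual knobs for capturing detailed RGW logs, not something mandated by this comment:

# illustrative instance name; repeat for each gateway
[client.rgw.rgw1]
rgw md log max shards = 1
rgw data log num shards = 1
debug rgw = 20
debug ms = 1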

Comment 7 shilpa 2016-04-22 12:27:42 UTC
(In reply to Casey Bodley from comment #6)
> I'm still working to reproduce this locally. Are multipart uploads specific
> to this failure, or have you seen the same sync failures in tests without
> multipart?
> 
> It would help if we could narrow this down to a small set of steps that can
> consistently reproduce this. Setting these configuration values on all
> gateways should make the behavior more deterministic, and reduce the amount
> of log output:
> 
>   rgw md log max shards = 1
>   rgw data log num shards = 1
> 
> If you're unable to reproduce the issues with 1 shard, please try with 8.
> 
> Once you've reproduced a case where sync has hung for 5 minutes or more,
> please provide full logs for both gateways.
> 
> Also helpful would be the list of commands used to reproduce, including the
> radosgw-admin commands to set up the multisite configuration.

(In reply to Casey Bodley from comment #4)
> I've been so far unable to reproduce this with multipart uploads in upstream
> testing. There are two relevant fixes that were merged to master in the past
> month:
> https://github.com/ceph/ceph/pull/8190
> https://github.com/ceph/ceph/pull/8453

Hi Casey,

We are expecting a new downstream build sometime next week. I believe there are quite a few fixes that went into the later builds. I will try to reproduce this in the next build. Does that make sense?

Comment 8 Ken Dreyer (Red Hat) 2016-04-26 21:03:20 UTC
ceph v10.2.0 is available for testing.

Comment 9 shilpa 2016-05-02 14:41:35 UTC
Re-tested this on 10.2.0 with the options:

>   rgw md log max shards = 1
>   rgw data log num shards = 1

I see the same behaviour even with small files.

Attaching the logs from both nodes.

Steps to set up the clusters:

On the master zone (rgw1):

$ radosgw-admin realm create --rgw-realm=earth
$ radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:8080 --master
$ radosgw-admin zonegroup default --rgw-zonegroup=us
$ radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-1 --access-key=${access_key} --secret=${secret} --endpoints=http://rgw1:8080
$ radosgw-admin zone default --rgw-zone=us-1
$ radosgw-admin zonegroup add --rgw-zonegroup=us --rgw-zone=us-1
$ radosgw-admin user create --uid=zone.jup --display-name="Zone User" --access-key=${access_key} --secret=${secret} --system
$ radosgw-admin period update --commit


On the secondary zone (rgw2):

$ radosgw-admin realm pull --url=http://rgw1:8080 --access-key=${access_key} --secret=${secret}
$ radosgw-admin realm default --rgw-realm=earth
$ radosgw-admin zonegroup default --rgw-zonegroup=us
$ radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-2 --access-key=${access_key} --secret=${secret} --default --endpoints=http://rgw2:8080
$ radosgw-admin period update --commit
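
For completeness, a rough sketch of the per-gateway ceph.conf entries that usually accompany this kind of setup; the instance names and frontend port are assumptions, not taken from the test environment:

# illustrative instance names; each gateway binds to its local zone
[client.rgw.rgw1]
rgw zone = us-1
rgw frontends = civetweb port=8080

[client.rgw.rgw2]
rgw zone = us-2
rgw frontends = civetweb port=8080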

After restarting both gateways, I created a user called testuser, created a container, and wrote some files to it from the master zone; these synced successfully.

Uploaded a few more files on both the master and secondary zones, one after the other. This time the files did not sync. A radosgw restart is required to get them to sync.
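
For reference, the gateway restart on a systemd-based install would look roughly like this; the unit instance names are assumptions:

# run on each gateway host; instance names are illustrative
$ sudo systemctl restart ceph-radosgw@rgw.rgw1    # master gateway
$ sudo systemctl restart ceph-radosgw@rgw.rgw2    # secondary gateway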

Comment 10 shilpa 2016-05-02 14:50:31 UTC
Created attachment 1152981 [details]
master node logs

Comment 11 shilpa 2016-05-02 14:54:26 UTC
Created attachment 1152982 [details]
Logs

Comment 12 Harish NV Rao 2016-05-03 11:34:17 UTC
Please note that this issue is now seen with small objects as well. Please address it as soon as possible.

Comment 13 Yehuda Sadeh 2016-05-03 17:34:45 UTC
There are some more fixes that haven't made their way into 10.2.0. We're currently testing them and will update when they are ready.

Comment 14 Ken Dreyer (Red Hat) 2016-05-06 02:49:40 UTC
Would you please link to all the in-progress tickets or PRs that are necessary in order to fix this bug, so that we can track the progress?

Comment 15 Casey Bodley 2016-05-06 16:06:13 UTC
To my knowledge, we still haven't reproduced a case where sync stops completely. The bug that we're tracking involves objects that are synced and resynced repeatedly between the master and other zones, which results in the sync status never catching up. This corresponds to http://tracker.ceph.com/issues/15565 and https://github.com/ceph/ceph/pull/8772.

Comment 16 Ken Dreyer (Red Hat) 2016-05-10 20:29:38 UTC
Ceph v10.2.1 is going to be out in the next couple of days. Shilpa, would you please try to reproduce it with this new build once it is available?

Comment 17 shilpa 2016-05-11 07:09:20 UTC
(In reply to Ken Dreyer (Red Hat) from comment #16)
> Ceph v10.2.1 is going to be out in the next couple of days. Shilpa, would
> you please try to reproduce it with this new build once it is available?

Sure will do that.

Comment 18 Ken Dreyer (Red Hat) 2016-05-17 00:14:09 UTC
v10.2.1 is now available in today's puddle, so it's ready for testing.

Comment 19 shilpa 2016-05-25 14:25:38 UTC
I still see the issue in 10.2.1. It is related to https://bugzilla.redhat.com/show_bug.cgi?id=1327142.

Comment 20 Casey Bodley 2016-05-26 19:06:08 UTC

*** This bug has been marked as a duplicate of bug 1327142 ***

