Bug 1693445 - rgw-multisite sync stuck recovering shard in already deleted versioned bucket
Summary: rgw-multisite sync stuck recovering shard in already deleted versioned bucket
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW-Multisite
Version: 3.2
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: z2
Target Release: 3.2
Assignee: Casey Bodley
QA Contact: Tejas
Docs Contact: Aron Gunn
URL:
Whiteboard:
Depends On:
Blocks: 1629656
 
Reported: 2019-03-27 20:42 UTC by Vikhyat Umrao
Modified: 2019-11-11 07:56 UTC
CC: 10 users

Fixed In Version: RHEL: ceph-12.2.8-113.el7cp Ubuntu: ceph_12.2.8-96redhat1xenial
Doc Type: Bug Fix
Doc Text:
.Synchronizing a multi-site Ceph Object Gateway was getting stuck
When recovering versioned objects, other operations were unable to finish. These stuck operations were caused by removing all of the expired `user.rgw.olh.pending` extended attributes (xattrs) at once on those versioned objects. A separate link:https://bugzilla.redhat.com/show_bug.cgi?id=1663570[bug] was causing too many `user.rgw.olh.pending` xattrs to be written to those recovering versioned objects. With this release, expired xattrs are removed in batches instead of all at once. As a result, versioned objects recover correctly and other operations can proceed normally.
Clone Of:
Environment:
Last Closed: 2019-04-30 15:57:08 UTC
Embargoed:


Links:
Ceph Project Bug Tracker 39118 (last updated 2019-04-04 21:03:31 UTC)
Red Hat Product Errata RHSA-2019:0911 (last updated 2019-04-30 15:57:23 UTC)

Description Vikhyat Umrao 2019-03-27 20:42:52 UTC
Description of problem:
rgw-multisite sync stuck recovering shard in already deleted versioned bucket

Version-Release number of selected component (if applicable):
Red Hat Ceph Storage 3.2.z1
rgw": {
        "ceph version 12.2.8-89.el7cp (2f66ab2fa63b2879913db6d6cf314572a83fd1f0) luminous (stable)": 3


From radosgw-admin sync status:


                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        1 shards are recovering
                        recovering shards: [99]

radosgw-admin data sync status --source-zone=test --shard-id=99
{
    "shard_id": 99,
    "marker": {
        "status": "incremental-sync",
        "marker": "1_1552928154.797517_1199287.1",
        "next_step_marker": "",
        "total_entries": 0,
        "pos": 0,
        "timestamp": "0.000000"
    },
    "pending_buckets": [],
    "recovering_buckets": [
        "testbucket:<bucket id>"
    ]
}
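
Follow-up diagnostics for a stuck recovering bucket (a sketch, not taken from this report: "testbucket" and the "test" source zone come from the output above, while the data pool name and object ID are hypothetical placeholders):

# per-bucket sync state against the source zone
radosgw-admin bucket sync status --bucket=testbucket --source-zone=test

# recent sync errors usually name the failing object
radosgw-admin sync error list

# the fix concerns expired user.rgw.olh.pending xattrs accumulating on a
# versioned object's OLH head object; count them with rados
rados -p test.rgw.buckets.data listxattr '<bucket id>_myobject' | grep -c user.rgw.olh.pending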

Comment 22 errata-xmlrpc 2019-04-30 15:57:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:0911

