Bug 1724106 - [RGW] RGW-Multisite errors when syncing buckets w/colon and other extended naming
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW
Version: 3.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 6.1
Assignee: Casey Bodley
QA Contact: Madhavi Kasturi
URL:
Whiteboard:
Depends On:
Blocks: 1726135 1727980
 
Reported: 2019-06-26 09:21 UTC by Tejas
Modified: 2023-03-26 18:01 UTC
CC: 10 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
.Invalid bucket names
There are some S3 bucket names that are invalid in AWS and therefore cannot be replicated by Ceph Object Gateway multisite sync. For more information about these bucket names, see the link:https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html[AWS documentation].
Clone Of:
Environment:
Last Closed: 2023-03-26 18:01:01 UTC
Embargoed:




Links
System: GitHub ceph/ceph pull 26787
Status: closed
Summary: [rgw]: Validate bucket names as per revised s3 spec
Last Updated: 2021-01-07 06:19:18 UTC

Internal Links: 1743388
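
The upstream fix tracked in the GitHub pull request above tightens bucket-name validation to the revised S3 spec. As a rough illustration only (this is not the code from that PR, and the names below are made up for the sketch), the AWS rules referenced in the Doc Text boil down to a check like this:

# Hedged sketch, not the actual change from ceph/ceph pull 26787. Per the AWS
# rules linked in the Doc Text, a bucket name must be 3-63 characters of
# lowercase letters, digits, '.' and '-', must start and end with a letter or
# digit, and must not look like an IPv4 address. Names such as
# 'tenant$tuffy:swift.bucky.0' fail these checks, which is why such buckets
# cannot be replicated by multisite.
import re

_BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")
_IPV4_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

def valid_s3_bucket_name(name: str) -> bool:
    return bool(_BUCKET_RE.match(name)) and not _IPV4_RE.match(name)

assert valid_s3_bucket_name("swift.bucky.0")                    # accepted
assert not valid_s3_bucket_name("tenant$tuffy:swift.bucky.0")   # ':' and '$' rejected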

Description Tejas 2019-06-26 09:21:32 UTC
Description of problem:
   On the long-running cluster, we are seeing multisite sync stuck due to the error: "ERROR: failed to parse bucket shard 'swift.bucky.0:55c9e095-9d20-4a0f-91ef-35e786611b6f.188905.1': Expected option value to be integer, got '55c9e095-9d20-4a0f-91ef-35e786611b6f.188905.1'"
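
For context on why the colon trips up sync (a hedged illustration only, not the actual RGW C++ parser): multisite encodes a bucket shard roughly as "[tenant/]bucket:instance[:shard-id]" and splits the string on ':'. A bucket name that itself contains ':' shifts the fields, so the trailing piece is no longer an integer shard id, which is what produces the error quoted above. The helper below is hypothetical:

# Hypothetical Python sketch of the failing parse; the real parser is C++,
# but the splitting ambiguity is the same.
def parse_bucket_shard_naive(key: str):
    """Naively treat everything after the first ':' as 'instance[:shard-id]'."""
    bucket, _, rest = key.partition(":")
    instance, _, shard = rest.partition(":")
    if shard and not shard.isdigit():
        raise ValueError("failed to parse bucket shard '%s': Expected option "
                         "value to be integer, got '%s'" % (rest, shard))
    return bucket, instance, int(shard) if shard else None

# The bucket from this report; its name contains ':' (and '$'):
key = "tenant$tuffy:swift.bucky.0:55c9e095-9d20-4a0f-91ef-35e786611b6f.188905.1"
parse_bucket_shard_naive(key)   # raises with the same message seen in the sync error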

Version-Release number of selected component (if applicable):
ceph version 12.2.8-128.el7cp

How reproducible:
Always

Steps to Reproduce:
1. The automated test suite creates a Swift container whose name contains a colon and a '$' (here 'tenant$tuffy:swift.bucky.0', created by the subuser 'tenant$tuffy:swift'); a hedged client sketch follows below.
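
A minimal sketch of how such a container can be created, assuming python-swiftclient against RGW's Swift auth v1 endpoint (the endpoint URL below is a placeholder; the user and key are the subuser credentials shown in the 'radosgw-admin user info' output further down):

import swiftclient

# Placeholder endpoint; credentials taken from the subuser shown below.
conn = swiftclient.Connection(
    authurl="http://rgw-node:8080/auth/v1.0",
    user="tenant$tuffy:swift",
    key="aojEP7WdrQ8CkqejFM8otxgzVV6q64awHMiso7SY",
    auth_version="1",
)

# The container name deliberately carries ':' and '$', mirroring the bucket
# 'tenant$tuffy:swift.bucky.0' that later fails to sync.
conn.put_container("tenant$tuffy:swift.bucky.0")
conn.put_object("tenant$tuffy:swift.bucky.0",
                "key.tuffy.container.0.29748",
                contents=b"x" * 1024,
                content_type="text/plain")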


This is a Swift bucket:

[root@extensa010 ubuntu]# radosgw-admin user info --uid 'tenant$tuffy'
{
    "user_id": "tenant$tuffy",
    "display_name": "tuffy",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [
        {
            "id": "tenant$tuffy:swift",
            "permissions": "full-control"
        }
    ],
    "keys": [
        {
            "user": "tenant$tuffy",
            "access_key": "3OCLNZ3AUfPEA1SABGPT",
            "secret_key": "ByLA2LI9fZG7ZC7YVy0P7fMTFTORB8ML9fLKTTTB"
        },
        {
            "user": "tenant$tuffy",
            "access_key": "64Gf8P3BO4OKP1F2BSUH",
            "secret_key": "DYHMuuBED7D8KQR4GQuD0W18WN8fuYUWD1WUPRPR"
        }
    ],
    "swift_keys": [
        {
            "user": "tenant$tuffy:swift",
            "secret_key": "aojEP7WdrQ8CkqejFM8otxgzVV6q64awHMiso7SY"
        }




]# radosgw-admin metadata list bucket
[
.
.
    "tenant/tenant$tuffy:swift.bucky.0",


]# radosgw-admin metadata list bucket.instance
[
.
.
    "tenant/tenant$tuffy:swift.bucky.0:55c9e095-9d20-4a0f-91ef-35e786611b6f.188905.1",

PRIMARY:
0 ERROR: failed to get bucket instance info for .bucket.meta.tenant/tenant$tuffy:swift.bucky.0:55c9e095-9d20-4a0f-91ef-35e786611b6f.188905.1

SEC:
0 ERROR: failed to parse bucket shard 'swift.bucky.0:55c9e095-9d20-4a0f-91ef-35e786611b6f.188905.1': Expected option value to be integer, got '55c9e095-9d20-4a0f-91ef-35e786611b6f.188905.1'

]# radosgw-admin data sync status --source-zone us-west --shard-id 86
{
    "shard_id": 86,
    "marker": {
        "status": "incremental-sync",
        "marker": "",
        "next_step_marker": "",
        "total_entries": 0,
        "pos": 0,
        "timestamp": "0.000000"
    },
    "pending_buckets": [
        "tenant/tenant$tuffy:swift.bucky.0:55c9e095-9d20-4a0f-91ef-35e786611b6f.188905.1"
    ],
    "recovering_buckets": []
}
[root@extensa004 ubuntu]# radosgw-admin sync status detail
          realm 081290c7-a532-470e-a27a-e532d19b57f2 (movies)
      zonegroup e4460222-1aae-4c86-9c70-721cd9c13d86 (us)
           zone d9ba421f-312f-4fa3-a936-8f1f8cc71932 (us-east)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 55c9e095-9d20-4a0f-91ef-35e786611b6f (us-west)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is behind on 1 shards
                        behind shards: [86]
                        oldest incremental change not applied: 2019-06-19 06:54:03.0.417697s



# radosgw-admin bucket list --bucket 'tenant/tenant$tuffy:swift.bucky.0'
[
    {
        "name": "key.tuffy.container.0.29748",
        "instance": "",
        "ver": {
            "pool": 9,
            "epoch": 183491
        },
        "locator": "",
        "exists": "true",
        "meta": {
            "category": 1,
            "size": 10485760,
            "mtime": "2019-06-19 13:13:20.861120Z",
            "etag": "a29936cfc11cd3ce79a1af28a329c4c6",
            "owner": "tenant$tuffy",
            "owner_display_name": "tuffy",
            "content_type": "text/plain",
            "accounted_size": 10485760,
            "user_data": ""
        },
        "tag": "55c9e095-9d20-4a0f-91ef-35e786611b6f.367158.39373",
        "flags": 0,
        "pending_map": [],
        "versioned_epoch": 0
    }
]

