Bug 1449761 - Objects of bucket with '_' fail to sync in multisite env
Summary: Objects of bucket with '_' fail to sync in multisite env
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Documentation
Version: 2.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 2.3
Assignee: John Wilkins
QA Contact: shilpa
URL:
Whiteboard:
Depends On:
Blocks: 1437905
 
Reported: 2017-05-10 15:30 UTC by shilpa
Modified: 2017-07-11 17:16 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-07-11 17:16:14 UTC
Embargoed:




Links:
Ceph Project Bug Tracker 19904 (Last Updated: 2017-05-11 07:50:38 UTC)

Description shilpa 2017-05-10 15:30:52 UTC
Description of problem:
Created a bucket '__test__' on the source, which synced successfully to the target, but the objects in it do not sync.

Version-Release number of selected component (if applicable):
ceph-radosgw-10.2.7-13

How reproducible:
Always

Steps to Reproduce:
1. Create bucket with '_'
2. Let the bucket sync 
3. Create objects in it and check whether they sync (example commands below)
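
For reference, the reproduction can be driven with the swift client roughly as follows; the endpoints, credentials, and secondary-zone hostname are placeholders, not the exact ones used here.

On the master zone, create the bucket and upload an object:
# swift -A http://rgw-master:8080/auth/1.0 -U tj:swift -K <secret> post __test__
# swift -A http://rgw-master:8080/auth/1.0 -U tj:swift -K <secret> upload __test__ segmentaa

On the secondary zone, check whether the object arrived:
# swift -A http://rgw-secondary:8080/auth/1.0 -U tj:swift -K <secret> list __test__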

Actual results:
Objects fail to sync.

Additional info:

# swift -A http://rgw-master:8080/auth/1.0 -U tj:swift -K "hk7TKCJIvMO3Wj7AaPFP53BAh96uRLek0xt89eIH" list __test__
segmentaa


On source/master:

    {
        "bucket": "__test__",
        "pool": "iny.rgw.buckets.data",
        "index_pool": "iny.rgw.buckets.index",
        "id": "e57026aa-45e4-48fb-9cd3-b51bdd792a2a.292866.4",
        "marker": "e57026aa-45e4-48fb-9cd3-b51bdd792a2a.292866.4",
        "owner": "tj",
        "ver": "0#3",
        "master_ver": "0#0",
        "mtime": "2017-05-10 11:41:32.459948",
        "max_marker": "0#00000000002.21.3",
        "usage": {
            "rgw.main": {
                "size_kb": 512000,
                "size_kb_actual": 512000,
                "num_objects": 1
            }

On target:

   {
        "bucket": "__test__",
        "pool": "miny.rgw.buckets.data",
        "index_pool": "miny.rgw.buckets.index",
        "id": "e57026aa-45e4-48fb-9cd3-b51bdd792a2a.292866.4",
        "marker": "e57026aa-45e4-48fb-9cd3-b51bdd792a2a.292866.4",
        "owner": "tj",
        "ver": "0#1",
        "master_ver": "0#0",
        "mtime": "2017-05-10 11:41:32.459948",
        "max_marker": "0#",
        "usage": {},
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        }

2017-05-10 15:00:29.290963 7f7a8d7ea700 20 cr:s=0x7f7a0021d970:op=0x7f7a00421fa0:20RGWContinuousLeaseCR: operate() returned r=-16
2017-05-10 15:00:29.290964 7f7a8d7ea700 20 stack->operate() returned ret=-16
2017-05-10 15:00:29.290965 7f7a8d7ea700 20 run: stack=0x7f7a0021d970 is done
2017-05-10 15:00:29.290968 7f7a8d7ea700 20 cr:s=0x7f7a00106490:op=0x7f7a0051f1b0:24RGWBucketShardFullSyncCR: operate()
2017-05-10 15:00:29.290969 7f7a8d7ea700  5 data sync: lease cr failed, done early
2017-05-10 15:00:29.290971 7f7a8d7ea700 20 cr:s=0x7f7a00106490:op=0x7f7a0051f1b0:24RGWBucketShardFullSyncCR: operate() returned r=-16
2017-05-10 15:00:29.290974 7f7a8d7ea700  5 data sync: Sync:e57026aa:data:BucketFull:__test__:e57026aa-45e4-48fb-9cd3-b51bdd792a2a.292866.4:finish
2017-05-10 15:00:29.290977 7f7a8d7ea700 20 cr:s=0x7f7a00106490:op=0x7f7a006c5b80:25RGWRunBucketSyncCoroutine: operate()
2017-05-10 15:00:29.290978 7f7a8d7ea700  5 data sync: full sync on __test__:e57026aa-45e4-48fb-9cd3-b51bdd792a2a.292866.4 failed, retcode=-16

Comment 3 Orit Wasserman 2017-05-10 19:06:24 UTC
We are failing to sync the bucket instance:
  {
                "id": "1_1494428689.370173_1208544.1",
                "section": "data",
                "name": "__test__:e57026aa-45e4-48fb-9cd3-b51bdd792a2a.292866.4",
                "timestamp": "2017-05-10 15:04:49.370173Z",
                "info": {
                    "source_zone": "e57026aa-45e4-48fb-9cd3-b51bdd792a2a",
                    "error_code": 22,
                    "message": "failed to sync bucket instance: (22) Invalid argument"
                }
            },
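
For reference, an entry like the one above is what the multisite sync error log typically shows; assuming the subcommand is available in this build, it can be listed on the secondary zone with:
# radosgw-admin sync error list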

in the master:
__test__?versions
2017-05-10 15:00:36.284440 7f49f4ff1700 15 calculated digest=LnXfLGVwpdXJfyimYnPdPIPXu0w=
2017-05-10 15:00:36.284441 7f49f4ff1700 15 auth_sign=LnXfLGVwpdXJfyimYnPdPIPXu0w=
2017-05-10 15:00:36.284442 7f49f4ff1700 15 compare=0
2017-05-10 15:00:36.284443 7f49f4ff1700 20 system request
2017-05-10 15:00:36.284447 7f49f4ff1700  2 req 7759:0.000109:s3:GET /__test__:list_bucket:normalizing buckets and tenants
2017-05-10 15:00:36.284450 7f49f4ff1700 10 s->object=<NULL> s->bucket=__test__
2017-05-10 15:00:36.284452 7f49f4ff1700 10 failed to run post-auth init
2017-05-10 15:00:36.284453 7f49f4ff1700 20 op->ERRORHANDLER: err_no=-2000 new_err_no=-2000
2017-05-10 15:00:36.284508 7f49f4ff1700  2 req 7759:0.000170:s3:GET /__test__:list_bucket:op status=0
2017-05-10 15:00:36.284516 7f49f4ff1700  2 req 7759:0.000178:s3:GET /__test__:list_bucket:http status=400
2017-05-10 15:00:36.284519 7f49f4ff1700  1 ====== req done req=0x7f49f4feb710 op status=0 http_status=400 ======

Comment 4 Orit Wasserman 2017-05-10 19:11:19 UTC
The problem, it seems, is that __test__ is an invalid S3 bucket name but a valid Swift bucket name.
The sync is done with the S3 API, and that is why it fails.
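
The same point can be checked through the S3 API: with strict naming in effect, creating a bucket called __test__ should be refused, either by the client's own name validation or by RGW with InvalidBucketName. For example (client choice and endpoint configuration are illustrative only, not part of this report):

# s3cmd mb s3://__test__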

Comment 5 Matt Benjamin (redhat) 2017-05-10 19:43:59 UTC
(In reply to Orit Wasserman from comment #4)
> The problem, it seems, is that __test__ is an invalid S3 bucket name but a
> valid Swift bucket name.
> The sync is done with the S3 API, and that is why it fails.

This issue has been raised against the NFS interface, which also validates as S3.

Matt

Comment 6 Orit Wasserman 2017-05-11 07:50:38 UTC
(In reply to Matt Benjamin (redhat) from comment #5)
> (In reply to Orit Wasserman from comment #4)
> > The problem, it seems, is that __test__ is an invalid S3 bucket name but a
> > valid Swift bucket name.
> > The sync is done with the S3 API, and that is why it fails.
> 
> This issue has been raised against the NFS interface, which also validates
> as S3.
> 
> Matt

For NFS you can call valid_s3_bucket_name with relaxed_names=true; this will allow using '_', '-', and '.' in the file name.

A possible workaround is adding to ceph.conf:
rgw relaxed s3 bucket names = true
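
For example, in the gateway's section of ceph.conf, followed by a gateway restart (the section name and service unit below are illustrative and depend on how the gateway instance is named in this deployment):

[client.rgw.rgw-master]
rgw relaxed s3 bucket names = true

# systemctl restart ceph-radosgw@rgw.rgw-master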

Shilpa, can you confirm that helps?

Comment 7 shilpa 2017-05-16 07:11:22 UTC
(In reply to Orit Wasserman from comment #6)
> (In reply to Matt Benjamin (redhat) from comment #5)
> > (In reply to Orit Wasserman from comment #4)
> > > The problem, it seems, is that __test__ is an invalid S3 bucket name but a
> > > valid Swift bucket name.
> > > The sync is done with the S3 API, and that is why it fails.
> > 
> > This issue has been raised against the NFS interface, which also validates
> > as S3.
> > 
> > Matt
> 
> For NFS you can call valid_s3_bucket_name with relaxed_names=true; this will
> allow using '_', '-', and '.' in the file name.
> 
> A possible workaround is adding to ceph.conf:
> rgw relaxed s3 bucket names = true
> 
> Shilpa, can you confirm that helps?

Hi Orit,

Thanks, that worked. Adding 'rgw relaxed s3 bucket names = true' needs to be documented.

Comment 9 shilpa 2017-06-05 07:16:04 UTC
(In reply to John Wilkins from comment #8)
> I added a note to modify the Ceph configuration file. Details here:
> https://gitlab.cee.redhat.com/red-hat-ceph-storage-documentation/doc-Red_Hat_Ceph_Storage_2-Object_Gateway/commit/456a13e589e7e5482d47afcde45aaa58a3f9bb5f#5ce29d591944a94a80c7551b37e20c231f117e43

Is there an underscore missing in `rgw_relaxed s3_bucket_names` in the doc?

Comment 11 shilpa 2017-06-13 16:55:59 UTC
(In reply to John Wilkins from comment #10)
> Fixed. See
> https://gitlab.cee.redhat.com/red-hat-ceph-storage-documentation/doc-Red_Hat_Ceph_Storage_2-Object_Gateway/commit/d8f25b838484e87e47570776e02a5c5b374820c9

Thanks. Is this line 228 related: "Save the `/etc/ganesha/ganesha.conf` configuration file"? It should be ceph.conf.

Comment 12 John Wilkins 2017-06-16 17:09:09 UTC
There are two configuration files: the Ceph configuration file, which is where the Ceph settings go, and a ganesha.conf file. They are in separate directories, and the ganesha.conf file points to the ceph.conf file.
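
For context, the link between the two files usually looks something like this in the nfs-ganesha RGW FSAL block; the path and parameter name below reflect a typical setup, not necessarily this exact deployment:

RGW {
    ceph_conf = "/etc/ceph/ceph.conf";
}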

Comment 13 shilpa 2017-06-19 07:45:58 UTC
(In reply to John Wilkins from comment #12)
> There are two configuration files. The Ceph configuration file, which is
> where the Ceph settings go. There is also a ganesha.conf file. They are in
> separate directories, and the ganesha.conf file points to the ceph.conf file.

Right, but since this is a Ceph configuration change, I think it should just contain a reference to ceph.conf to avoid confusion.

Comment 14 John Wilkins 2017-06-27 16:17:06 UTC
I've moved all ceph.conf comments to step 2 to avoid confusion between editing ceph.conf and ganesha.conf.

https://access.qa.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html-single/object_gateway_guide_for_red_hat_enterprise_linux/#exporting_the_namespace_to_nfs_ganesha

