Bug 1669901 - [RFE] Implement mechanism and command to change/reset bucket objects owner / RGW bucket chown
Summary: [RFE] Implement mechanism and command to change/reset bucket objects owner / RGW bucket chown
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW
Version: 3.2
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: z2
Target Release: 3.2
Assignee: shilpa
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2019-01-27 20:33 UTC by Vikhyat Umrao
Modified: 2019-04-30 21:26 UTC
CC List: 11 users

Fixed In Version: RHEL: ceph-12.2.8-116.el7cp Ubuntu: ceph_12.2.8-99redhat1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-04-30 15:56:46 UTC
Embargoed:




Links:
- Red Hat Knowledge Base (Solution) 4097461: Ceph RGW - How to change ownership of a bucket? (last updated 2019-04-30 21:26:24 UTC)
- Red Hat Product Errata RHSA-2019:0911 (last updated 2019-04-30 15:57:00 UTC)

Description Vikhyat Umrao 2019-01-27 20:33:47 UTC
Description of problem:
Object GET requests fail with ERROR 403 (AccessDenied) after running the radosgw-admin bucket unlink and bucket link commands to move a bucket from one user to another.


$ s3cmd -c .s3cfg.rhcs3.testuser2 get s3://foo1/TRN57450887.pdf
download: 's3://foo1/TRN57450887.pdf' -> './TRN57450887.pdf'  [1 of 1]
ERROR: S3 error: 403 (AccessDenied)

2019-01-28 00:19:55.765926 7f3d2cbf6700  1 ====== starting new request req=0x7f3d2cbeff90 =====
2019-01-28 00:19:56.006001 7f3d2cbf6700  1 ====== req done req=0x7f3d2cbeff90 op status=0 http_status=403 ======
2019-01-28 00:19:56.006048 7f3d2cbf6700  1 civetweb: 0x5572d3ed4000: 10.3.117.191 - - [28/Jan/2019:00:19:55 +0530] "GET /foo1/TRN57450887.pdf HTTP/1.1" 403 0 - -

$ s3cmd -c .s3cfg.rhcs3.testuser2 get s3://foo2/TRN57450887.pdf
download: 's3://foo2/TRN57450887.pdf' -> './TRN57450887.pdf'  [1 of 1]
ERROR: S3 error: 403 (AccessDenied)

2019-01-28 00:28:48.470630 7f3d2c3f5700  1 ====== starting new request req=0x7f3d2c3eef90 =====
2019-01-28 00:28:48.741487 7f3d2c3f5700  1 ====== req done req=0x7f3d2c3eef90 op status=0 http_status=403 ======
2019-01-28 00:28:48.741531 7f3d2c3f5700  1 civetweb: 0x5572d3ed9000: 10.3.117.191 - - [28/Jan/2019:00:28:48 +0530] "GET /foo2/TRN57450887.pdf HTTP/1.1" 403 0 - -


$ s3cmd -c .s3cfg.rhcs3.testuser2 get s3://foo3/TRN57450887.pdf
download: 's3://foo3/TRN57450887.pdf' -> './TRN57450887.pdf'  [1 of 1]
ERROR: S3 error: 403 (AccessDenied)

2019-01-28 00:28:57.574446 7f3d2bbf4700  1 ====== starting new request req=0x7f3d2bbedf90 =====
2019-01-28 00:28:57.811423 7f3d2bbf4700  1 ====== req done req=0x7f3d2bbedf90 op status=0 http_status=403 ======
2019-01-28 00:28:57.811469 7f3d2bbf4700  1 civetweb: 0x5572d3ede000: 10.3.117.191 - - [28/Jan/2019:00:28:57 +0530] "GET /foo3/TRN57450887.pdf HTTP/1.1" 403 0


Version-Release number of selected component (if applicable):
RHCS 3.2
# ceph -v
ceph version 12.2.8-52.el7cp (3af3ca15b68572a357593c261f95038d02f46201) luminous (stable)


How reproducible:
Always
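
Steps to Reproduce (condensed from the full run with output in comment#1):
1. Create two users, testuser1 and testuser2.
2. As testuser1, create a bucket and upload an object into it.
3. Move the bucket to testuser2:
   # radosgw-admin bucket unlink --uid testuser1 --bucket foo1 --rgw-realm=test
   # radosgw-admin bucket link --uid testuser2 --bucket foo1 --rgw-realm=test
4. As testuser2, GET the object - the request fails with 403 (AccessDenied) even though testuser2 now owns the bucket.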

Comment 1 Vikhyat Umrao 2019-01-27 20:35:50 UTC
Test cases and test results:
====================================

# ceph -v
ceph version 12.2.8-52.el7cp (3af3ca15b68572a357593c261f95038d02f46201) luminous (stable)

# radosgw-admin user create --uid=testuser1 --display-name="Test User 1" --rgw-realm=test
{
    "user_id": "testuser1",
    "display_name": "Test User 1",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "testuser1",
            "access_key": "2DIK1N2NEZ9UPT8WUIFA",
            "secret_key": "rusFcVjocH33yWYZ3VG4hvIv6IklsIDWzGsWkkw0"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}

# radosgw-admin user create --uid=testuser2 --display-name="Test User 2" --rgw-realm=test
{
    "user_id": "testuser2",
    "display_name": "Test User 2",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "testuser2",
            "access_key": "6YZMTEI4CTYL4XFJLCMN",
            "secret_key": "K2M9hGPwFPBNNHabJqi8Ie5B6sNylKA9sMZMs2eB"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}

$ s3cmd -c .s3cfg.rhcs3.testuser1 ls
$ s3cmd -c .s3cfg.rhcs3.testuser2 ls

$ s3cmd -c .s3cfg.rhcs3.testuser1 mb s3://foo1
Bucket 's3://foo1/' created

$ s3cmd -c .s3cfg.rhcs3.testuser1 mb s3://foo2
Bucket 's3://foo2/' created

$ s3cmd -c .s3cfg.rhcs3.testuser1 mb s3://foo3
Bucket 's3://foo3/' created

$ s3cmd -c .s3cfg.rhcs3.testuser1 ls          
2019-01-27 18:34  s3://foo1
2019-01-27 18:34  s3://foo2
2019-01-27 18:34  s3://foo3

$ s3cmd -c .s3cfg.rhcs3.testuser1 put Downloads/TRN57450887.pdf s3://foo1
upload: 'Downloads/TRN57450887.pdf' -> 's3://foo1/TRN57450887.pdf'  [1 of 1]
 11437 of 11437   100% in    1s     8.95 kB/s  done

$ s3cmd -c .s3cfg.rhcs3.testuser1 put Downloads/TRN57450887.pdf s3://foo2
upload: 'Downloads/TRN57450887.pdf' -> 's3://foo2/TRN57450887.pdf'  [1 of 1]
 11437 of 11437   100% in    1s    10.43 kB/s  done

$ s3cmd -c .s3cfg.rhcs3.testuser1 put Downloads/TRN57450887.pdf s3://foo3
upload: 'Downloads/TRN57450887.pdf' -> 's3://foo3/TRN57450887.pdf'  [1 of 1]
 11437 of 11437   100% in    1s    10.43 kB/s  done


$ s3cmd -c .s3cfg.rhcs3.testuser1 ls s3://foo1
2019-01-27 18:37     11437   s3://foo1/TRN57450887.pdf

$ s3cmd -c .s3cfg.rhcs3.testuser1 ls s3://foo2
2019-01-27 18:37     11437   s3://foo2/TRN57450887.pdf

$ s3cmd -c .s3cfg.rhcs3.testuser1 ls s3://foo3
2019-01-27 18:37     11437   s3://foo3/TRN57450887.pdf

# radosgw-admin bucket list --rgw-realm=test --uid=testuser1
[
    "foo1",
    "foo2",
    "foo3"
]

# radosgw-admin bucket stats --bucket=foo1 --rgw-realm=test
{
    "bucket": "foo1",
    "zonegroup": "72f3a886-4c70-420b-bc39-7687f072997d",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": ""
    },
    "id": "a5e44ecd-7aae-4e39-b743-3a709acb60c5.474948.1",
    "marker": "a5e44ecd-7aae-4e39-b743-3a709acb60c5.474948.1",
    "index_type": "Normal",
    "owner": "testuser1",
    "ver": "0#1,1#1,2#1,3#3,4#1,5#1,6#1,7#1,8#1,9#1,10#1,11#1,12#1,13#1,14#1,15#1,16#1,17#1,18#1,19#1,20#1,21#1,22#1,23#1,24#1,25#1,26#1,27#1,28#1,29#1,30#1,31#1",
    "master_ver": "0#0,1#0,2#0,3#0,4#0,5#0,6#0,7#0,8#0,9#0,10#0,11#0,12#0,13#0,14#0,15#0,16#0,17#0,18#0,19#0,20#0,21#0,22#0,23#0,24#0,25#0,26#0,27#0,28#0,29#0,30#0,31#0",
    "mtime": "2019-01-28 00:04:52.660004",
    "max_marker": "0#,1#,2#,3#,4#,5#,6#,7#,8#,9#,10#,11#,12#,13#,14#,15#,16#,17#,18#,19#,20#,21#,22#,23#,24#,25#,26#,27#,28#,29#,30#,31#",
    "usage": {
        "rgw.main": {
            "size": 11437,
            "size_actual": 12288,
            "size_utilized": 11437,
            "size_kb": 12,
            "size_kb_actual": 12,
            "size_kb_utilized": 12,
            "num_objects": 1
        }
    },
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}

# radosgw-admin bucket unlink --uid testuser1 --bucket foo1 --rgw-realm=test
# radosgw-admin bucket link --uid testuser2 --bucket foo1 --rgw-realm=test

# radosgw-admin bucket unlink --uid testuser1 --bucket foo2 --rgw-realm=test
# radosgw-admin bucket link --uid testuser2 --bucket foo2 --rgw-realm=test

# radosgw-admin bucket unlink --uid testuser1 --bucket foo3 --rgw-realm=test
# radosgw-admin bucket link --uid testuser2 --bucket foo3 --rgw-realm=test

# radosgw-admin bucket list --rgw-realm=test --uid=testuser1
[]

# radosgw-admin bucket list --rgw-realm=test --uid=testuser2
[
    "foo1",
    "foo2",
    "foo3"
]


# radosgw-admin bucket stats --bucket=foo1 --rgw-realm=test
{
    "bucket": "foo1",
    "zonegroup": "72f3a886-4c70-420b-bc39-7687f072997d",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": ""
    },
    "id": "a5e44ecd-7aae-4e39-b743-3a709acb60c5.474948.1",
    "marker": "a5e44ecd-7aae-4e39-b743-3a709acb60c5.474948.1",
    "index_type": "Normal",
    "owner": "testuser2",
    "ver": "0#1,1#1,2#1,3#3,4#1,5#1,6#1,7#1,8#1,9#1,10#1,11#1,12#1,13#1,14#1,15#1,16#1,17#1,18#1,19#1,20#1,21#1,22#1,23#1,24#1,25#1,26#1,27#1,28#1,29#1,30#1,31#1",
    "master_ver": "0#0,1#0,2#0,3#0,4#0,5#0,6#0,7#0,8#0,9#0,10#0,11#0,12#0,13#0,14#0,15#0,16#0,17#0,18#0,19#0,20#0,21#0,22#0,23#0,24#0,25#0,26#0,27#0,28#0,29#0,30#0,31#0",
    "mtime": "2019-01-28 00:13:23.789244",
    "max_marker": "0#,1#,2#,3#,4#,5#,6#,7#,8#,9#,10#,11#,12#,13#,14#,15#,16#,17#,18#,19#,20#,21#,22#,23#,24#,25#,26#,27#,28#,29#,30#,31#",
    "usage": {
        "rgw.main": {
            "size": 11437,
            "size_actual": 12288,
            "size_utilized": 11437,
            "size_kb": 12,
            "size_kb_actual": 12,
            "size_kb_utilized": 12,
            "num_objects": 1
        }
    },
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}

# radosgw-admin bucket stats --bucket=foo2 --rgw-realm=test
{
    "bucket": "foo2",
    "zonegroup": "72f3a886-4c70-420b-bc39-7687f072997d",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": ""
    },
    "id": "a5e44ecd-7aae-4e39-b743-3a709acb60c5.474948.2",
    "marker": "a5e44ecd-7aae-4e39-b743-3a709acb60c5.474948.2",
    "index_type": "Normal",
    "owner": "testuser2",
    "ver": "0#1,1#1,2#1,3#3,4#1,5#1,6#1,7#1,8#1,9#1,10#1,11#1,12#1,13#1,14#1,15#1,16#1,17#1,18#1,19#1,20#1,21#1,22#1,23#1,24#1,25#1,26#1,27#1,28#1,29#1,30#1,31#1",
    "master_ver": "0#0,1#0,2#0,3#0,4#0,5#0,6#0,7#0,8#0,9#0,10#0,11#0,12#0,13#0,14#0,15#0,16#0,17#0,18#0,19#0,20#0,21#0,22#0,23#0,24#0,25#0,26#0,27#0,28#0,29#0,30#0,31#0",
    "mtime": "2019-01-28 00:13:53.808991",
    "max_marker": "0#,1#,2#,3#,4#,5#,6#,7#,8#,9#,10#,11#,12#,13#,14#,15#,16#,17#,18#,19#,20#,21#,22#,23#,24#,25#,26#,27#,28#,29#,30#,31#",
    "usage": {
        "rgw.main": {
            "size": 11437,
            "size_actual": 12288,
            "size_utilized": 11437,
            "size_kb": 12,
            "size_kb_actual": 12,
            "size_kb_utilized": 12,
            "num_objects": 1
        }
    },
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}

# radosgw-admin bucket stats --bucket=foo3 --rgw-realm=test
{
    "bucket": "foo3",
    "zonegroup": "72f3a886-4c70-420b-bc39-7687f072997d",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": ""
    },
    "id": "a5e44ecd-7aae-4e39-b743-3a709acb60c5.474948.3",
    "marker": "a5e44ecd-7aae-4e39-b743-3a709acb60c5.474948.3",
    "index_type": "Normal",
    "owner": "testuser2",
    "ver": "0#1,1#1,2#1,3#3,4#1,5#1,6#1,7#1,8#1,9#1,10#1,11#1,12#1,13#1,14#1,15#1,16#1,17#1,18#1,19#1,20#1,21#1,22#1,23#1,24#1,25#1,26#1,27#1,28#1,29#1,30#1,31#1",
    "master_ver": "0#0,1#0,2#0,3#0,4#0,5#0,6#0,7#0,8#0,9#0,10#0,11#0,12#0,13#0,14#0,15#0,16#0,17#0,18#0,19#0,20#0,21#0,22#0,23#0,24#0,25#0,26#0,27#0,28#0,29#0,30#0,31#0",
    "mtime": "2019-01-28 00:15:48.857618",
    "max_marker": "0#,1#,2#,3#,4#,5#,6#,7#,8#,9#,10#,11#,12#,13#,14#,15#,16#,17#,18#,19#,20#,21#,22#,23#,24#,25#,26#,27#,28#,29#,30#,31#",
    "usage": {
        "rgw.main": {
            "size": 11437,
            "size_actual": 12288,
            "size_utilized": 11437,
            "size_kb": 12,
            "size_kb_actual": 12,
            "size_kb_utilized": 12,
            "num_objects": 1
        }
    },
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}


$ s3cmd -c .s3cfg.rhcs3.testuser1 ls                                     

$ s3cmd -c .s3cfg.rhcs3.testuser2 ls
2019-01-27 18:43  s3://foo1
2019-01-27 18:43  s3://foo2
2019-01-27 18:45  s3://foo3

$ s3cmd -c .s3cfg.rhcs3.testuser2 ls s3://foo1
2019-01-27 18:37     11437   s3://foo1/TRN57450887.pdf

$ s3cmd -c .s3cfg.rhcs3.testuser2 ls s3://foo2
2019-01-27 18:37     11437   s3://foo2/TRN57450887.pdf

$ s3cmd -c .s3cfg.rhcs3.testuser2 ls s3://foo3
2019-01-27 18:37     11437   s3://foo3/TRN57450887.pdf


$ s3cmd -c .s3cfg.rhcs3.testuser2 get s3://foo1/TRN57450887.pdf
download: 's3://foo1/TRN57450887.pdf' -> './TRN57450887.pdf'  [1 of 1]
ERROR: S3 error: 403 (AccessDenied)

2019-01-28 00:19:55.765926 7f3d2cbf6700  1 ====== starting new request req=0x7f3d2cbeff90 =====
2019-01-28 00:19:56.006001 7f3d2cbf6700  1 ====== req done req=0x7f3d2cbeff90 op status=0 http_status=403 ======
2019-01-28 00:19:56.006048 7f3d2cbf6700  1 civetweb: 0x5572d3ed4000: 10.3.117.191 - - [28/Jan/2019:00:19:55 +0530] "GET /foo1/TRN57450887.pdf HTTP/1.1" 403 0 - -

$ s3cmd -c .s3cfg.rhcs3.testuser2 get s3://foo2/TRN57450887.pdf
download: 's3://foo2/TRN57450887.pdf' -> './TRN57450887.pdf'  [1 of 1]
ERROR: S3 error: 403 (AccessDenied)

2019-01-28 00:28:48.470630 7f3d2c3f5700  1 ====== starting new request req=0x7f3d2c3eef90 =====
2019-01-28 00:28:48.741487 7f3d2c3f5700  1 ====== req done req=0x7f3d2c3eef90 op status=0 http_status=403 ======
2019-01-28 00:28:48.741531 7f3d2c3f5700  1 civetweb: 0x5572d3ed9000: 10.3.117.191 - - [28/Jan/2019:00:28:48 +0530] "GET /foo2/TRN57450887.pdf HTTP/1.1" 403 0 - -


$ s3cmd -c .s3cfg.rhcs3.testuser2 get s3://foo3/TRN57450887.pdf
download: 's3://foo3/TRN57450887.pdf' -> './TRN57450887.pdf'  [1 of 1]
ERROR: S3 error: 403 (AccessDenied)

2019-01-28 00:28:57.574446 7f3d2bbf4700  1 ====== starting new request req=0x7f3d2bbedf90 =====
2019-01-28 00:28:57.811423 7f3d2bbf4700  1 ====== req done req=0x7f3d2bbedf90 op status=0 http_status=403 ======
2019-01-28 00:28:57.811469 7f3d2bbf4700  1 civetweb: 0x5572d3ede000: 10.3.117.191 - - [28/Jan/2019:00:28:57 +0530] "GET /foo3/TRN57450887.pdf HTTP/1.1" 403 0



==> It looks like the unlink/link commands do not update the old owner information in the object metadata, as the listings below show (see also the object stat sketch after them).


# radosgw-admin bucket list --bucket=foo1 --rgw-realm=test
[
    {
        "name": "TRN57450887.pdf",
        "instance": "",
        "ver": {
            "pool": 6,
            "epoch": 12
        },
        "locator": "",
        "exists": "true",
        "meta": {
            "category": 1,
            "size": 11437,
            "mtime": "2019-01-27 18:37:22.235043Z",
            "etag": "07ad03f3661ff2b9060ed8cf18e356f1",
            "owner": "testuser1",
            "owner_display_name": "Test User 1",
            "content_type": "application/pdf",
            "accounted_size": 11437,
            "user_data": ""
        },
        "tag": "a5e44ecd-7aae-4e39-b743-3a709acb60c5.474948.8",
        "flags": 0,
        "pending_map": [],
        "versioned_epoch": 0
    }
]

# radosgw-admin bucket list --bucket=foo2 --rgw-realm=test
[
    {
        "name": "TRN57450887.pdf",
        "instance": "",
        "ver": {
            "pool": 6,
            "epoch": 3
        },
        "locator": "",
        "exists": "true",
        "meta": {
            "category": 1,
            "size": 11437,
            "mtime": "2019-01-27 18:37:25.805424Z",
            "etag": "07ad03f3661ff2b9060ed8cf18e356f1",
            "owner": "testuser1",
            "owner_display_name": "Test User 1",
            "content_type": "application/pdf",
            "accounted_size": 11437,
            "user_data": ""
        },
        "tag": "a5e44ecd-7aae-4e39-b743-3a709acb60c5.474948.9",
        "flags": 0,
        "pending_map": [],
        "versioned_epoch": 0
    }
]

# radosgw-admin bucket list --bucket=foo3 --rgw-realm=test
[
    {
        "name": "TRN57450887.pdf",
        "instance": "",
        "ver": {
            "pool": 6,
            "epoch": 13
        },
        "locator": "",
        "exists": "true",
        "meta": {
            "category": 1,
            "size": 11437,
            "mtime": "2019-01-27 18:37:28.645538Z",
            "etag": "07ad03f3661ff2b9060ed8cf18e356f1",
            "owner": "testuser1",
            "owner_display_name": "Test User 1",
            "content_type": "application/pdf",
            "accounted_size": 11437,
            "user_data": ""
        },
        "tag": "a5e44ecd-7aae-4e39-b743-3a709acb60c5.474948.10",
        "flags": 0,
        "pending_map": [],
        "versioned_epoch": 0
    }
]
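
The stale owner can also be confirmed per object. A sketch, using the object name from the listings above; the attrs section of the output should include the user.rgw.acl xattr, which still references testuser1:

# radosgw-admin object stat --bucket=foo1 --object=TRN57450887.pdf --rgw-realm=test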

Comment 14 Vikhyat Umrao 2019-01-30 22:55:47 UTC
The RCA for this issue: while 'bucket link' changes the bucket owner, the objects' ACLs still point to the original owner's user ID, as seen in comment#1.

# radosgw-admin bucket list --bucket=foo1 --rgw-realm=test
[
    {
        "name": "TRN57450887.pdf",
        "instance": "",
        "ver": {
            "pool": 6,
            "epoch": 12
        },
        "locator": "",
        "exists": "true",
        "meta": {
            "category": 1,
            "size": 11437,
            "mtime": "2019-01-27 18:37:22.235043Z",
            "etag": "07ad03f3661ff2b9060ed8cf18e356f1",
            "owner": "testuser1", <==============================
            "owner_display_name": "Test User 1", <================
            "content_type": "application/pdf",
            "accounted_size": 11437,
            "user_data": ""
        },
        "tag": "a5e44ecd-7aae-4e39-b743-3a709acb60c5.474948.8",
        "flags": 0,
        "pending_map": [],
        "versioned_epoch": 0
    }
]
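
Until a proper chown mechanism exists, one possible workaround (untested here, and assuming the original owner's credentials are still available) is to have testuser1 re-grant access on each affected object through the S3 ACL API, for example with s3cmd:

$ s3cmd -c .s3cfg.rhcs3.testuser1 setacl --acl-grant=full_control:testuser2 s3://foo1/TRN57450887.pdf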

Comment 16 Vikhyat Umrao 2019-01-31 18:57:46 UTC
The bucket move feature that uses the radosgw-admin bucket unlink/link commands was only designed for moving a bucket between namespaces of the same user. One example is moving a bucket from a non-tenanted to a tenanted namespace: the bucket then stays with the same user but becomes a tenanted bucket. Changing the bucket and object owner to a different user is going to be a new feature:

[RFE] Implement mechanism and command to change/reset bucket objects owner/RGW bucket chown
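
A sketch of the intended workflow once this RFE lands, modeled on the upstream radosgw-admin 'bucket chown' interface (the exact syntax in the shipped build may differ): first link the bucket to the new user, then rewrite the object ACLs to match the new owner:

# radosgw-admin bucket link --uid=testuser2 --bucket=foo1
# radosgw-admin bucket chown --uid=testuser2 --bucket=foo1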

Comment 31 errata-xmlrpc 2019-04-30 15:56:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:0911

