Bug 2302940 - [rgw][8.0]: with tenanted users, access to the bucket is denied even after bucket policy is set
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 8.0
Assignee: Pritha Srivastava
QA Contact: Chaithra
URL:
Whiteboard:
Depends On:
Blocks: 2317218
 
Reported: 2024-08-05 19:31 UTC by Hemanth Sai
Modified: 2025-03-26 04:25 UTC
CC: 8 users

Fixed In Version: ceph-19.1.0-60.el9cp
Doc Type: Bug Fix
Doc Text:
.Bucket policy evaluations now work as expected and allow cross-tenant access for actions that are allowed by the policy

Previously, an incorrect bucket tenant value was used during bucket policy evaluation, so access was denied for S3 operations even when they were explicitly allowed in the bucket policy. As a result, bucket policy evaluation failed and S3 operations marked as allowed by the bucket policy were denied. With this fix, the requested bucket tenant name is correctly passed when the bucket policy is retrieved from the backend store. That tenant is then matched against the bucket tenant passed in as part of the S3 operation request, and the S3 operations work as expected.
Clone Of:
Environment:
Last Closed: 2024-11-25 09:05:12 UTC
Embargoed:




Links
Red Hat Issue Tracker RHCEPH-9613 (last updated 2024-08-29 05:41:59 UTC)
Red Hat Product Errata RHBA-2024:10216 (last updated 2024-11-25 09:05:17 UTC)

Description Hemanth Sai 2024-08-05 19:31:55 UTC
Description of problem:
Access to a bucket is denied even after a bucket policy is set. This occurs with tenanted users.
Fail log:
http://magna002.ceph.redhat.com/cephci-jenkins/results/openstack/RH/8.0/rhel-9/Regression/19.1.0-3/rgw/3/tier-2_rgw_regression_test/test_bucket_policy_with_multiple_statements_0.log

Manual testing fail log:
https://docs.google.com/document/d/1GdRycliNltBvV65dN9JRGJZrTbagZiNGrZi2Eil1N6g/edit#heading=h.j6mcxgg3pl77

With non-tenanted users, the bucket policy is respected as expected.
Pass log: https://docs.google.com/document/d/1GdRycliNltBvV65dN9JRGJZrTbagZiNGrZi2Eil1N6g/edit#heading=h.aqneowev7e3k



This behaviour is seen on 8.0; the tests pass on 7.1.
Pass log on 7.1: http://magna002.ceph.redhat.com/cephci-jenkins/results/openstack/RH/7.1/rhel-9/Regression/18.2.1-229/rgw/148/tier-2_rgw_regression_test/test_bucket_policy_with_multiple_statements_0.log




Version-Release number of selected component (if applicable):
ceph version 19.1.0-4.el9cp

How reproducible:
Always

Steps to Reproduce:
1. Create two tenanted users:
[cephuser@ceph-hsm-squid-atcvw7-node6 ~]$ radosgw-admin user create --uid user1 --tenant tenant1 --display-name user1 --access-key xyz1 --secret xyz1 --debug-rgw 0

[cephuser@ceph-hsm-squid-atcvw7-node6 ~]$ radosgw-admin user create --uid user1 --tenant tenant2 --display-name user1 --access-key abc100 --secret abc100 --debug-rgw 0

2. Configure s3cmd and create a bucket under tenant1:user1:
[cephuser@ceph-hsm-squid-atcvw7-node6 ~]$ s3cmd -c .s3cfg_tenant1_user1 mb s3://t1u1bkt1
Bucket 's3://t1u1bkt1/' created
[cephuser@ceph-hsm-squid-atcvw7-node6 ~]$

3. Set a bucket policy on the bucket:
[cephuser@ceph-hsm-squid-atcvw7-node6 ~]$ cat policy.json 
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": {"AWS": "*"},
         "Action": "s3:*",
         "Resource": "arn:aws:s3:::*"
      }
   ]
}
[cephuser@ceph-hsm-squid-atcvw7-node6 ~]$ 
[cephuser@ceph-hsm-squid-atcvw7-node6 ~]$ s3cmd -c .s3cfg_tenant1_user1 setpolicy policy.json s3://t1u1bkt1
s3://t1u1bkt1/: Policy updated
[cephuser@ceph-hsm-squid-atcvw7-node6 ~]$ 
[cephuser@ceph-hsm-squid-atcvw7-node6 ~]$ s3cmd -c .s3cfg_tenant1_user1 info s3://t1u1bkt1
s3://t1u1bkt1/ (bucket):
   Location:  default
   Payer:     BucketOwner
   Ownership: none
   Versioning:none
   Expiration rule: none
   Block Public Access: none
   Policy:    {
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": {"AWS": "*"},
         "Action": "s3:*",
         "Resource": "arn:aws:s3:::*"
      }
   ]
}

   CORS:      none
   ACL:       user1: FULL_CONTROL
[cephuser@ceph-hsm-squid-atcvw7-node6 ~]$ 
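As a side note, the policy above uses a wildcard principal, which grants access to any authenticated user. Ceph RGW also accepts tenant-qualified principals of the form arn:aws:iam::&lt;tenant&gt;:user/&lt;uid&gt; (tenant in the account field), which would scope the grant to tenant2:user1 only. A minimal illustrative sketch (Python; the helper name and the narrower resource ARN are assumptions, not taken from this report):

```python
import json

def tenant_principal(tenant, uid):
    # Hypothetical helper: Ceph RGW places the tenant name in the
    # account field of the IAM user ARN (per upstream RGW docs).
    return "arn:aws:iam::{}:user/{}".format(tenant, uid)

# Same shape as the policy in step 3, but scoped to one tenanted user
# and one bucket instead of "*"/"*" (illustrative values).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": tenant_principal("tenant2", "user1")},
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::t1u1bkt1/*",
        }
    ],
}

print(json.dumps(policy, indent=3))
```

The printed document can be saved as policy.json and applied with the same s3cmd setpolicy command shown above.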

4. Perform S3 operations from tenant2:user1; they fail with AccessDenied:

[cephuser@ceph-hsm-squid-atcvw7-node6 ~]$ s3cmd -c .s3cfg_tenant2_user1 ls s3://tenant1:t1u1bkt1
ERROR: Access to bucket 'tenant1:t1u1bkt1' was denied
ERROR: S3 error: 403 (AccessDenied)
[cephuser@ceph-hsm-squid-atcvw7-node6 ~]$ 
[cephuser@ceph-hsm-squid-atcvw7-node6 ~]$ 
[cephuser@ceph-hsm-squid-atcvw7-node6 ~]$ s3cmd -c .s3cfg_tenant2_user1 put obj1 s3://tenant1:t1u1bkt1/t2u1obj1
upload: 'obj1' -> 's3://tenant1:t1u1bkt1/t2u1obj1'  [1 of 1]
 10000000 of 10000000   100% in    0s    69.66 MB/s  done
ERROR: S3 error: 403 (AccessDenied)
[cephuser@ceph-hsm-squid-atcvw7-node6 ~]$
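The failure mode above lines up with the fix description: s3cmd addresses a tenanted bucket as "&lt;tenant&gt;:&lt;bucket&gt;" (e.g. "tenant1:t1u1bkt1"), and the tenant parsed from that request is what must be matched against the tenant of the bucket whose policy is fetched. A simplified illustration of the name splitting involved (plain Python, not RGW source code):

```python
def split_tenanted_bucket(name):
    # s3cmd addresses a tenanted bucket as "<tenant>:<bucket>",
    # e.g. "tenant1:t1u1bkt1"; a plain name carries no tenant prefix.
    tenant, sep, bucket = name.partition(":")
    return (tenant, bucket) if sep else ("", tenant)

# The request in step 4 targets tenant1's bucket from tenant2:user1;
# the policy evaluated must be the one stored for that tenant/bucket pair.
print(split_tenanted_bucket("tenant1:t1u1bkt1"))
print(split_tenanted_bucket("t1u1bkt1"))
```

Per the Doc Text, the bug was that the wrong tenant value reached this comparison during policy evaluation, so explicitly allowed operations were denied.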


Actual results:
All actions that are allowed in the bucket policy fail with AccessDenied for tenanted users.

Expected results:
The bucket policy should be respected for tenanted users as well.

Additional info:
RGW logs at debug level 20 are available here: http://magna002.ceph.redhat.com/cephci-jenkins/hsm/squid_tenant_users_policy_denied_bz/ceph-client.rgw.rgw.all.ceph-hsm-squid-atcvw7-node5.eenpck.log

Comment 14 errata-xmlrpc 2024-11-25 09:05:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 8.0 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:10216

Comment 15 Red Hat Bugzilla 2025-03-26 04:25:51 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days
