Bug 1695174
| Summary: | rgw: fix eval bucket policies and perms permissions for non-existent objects | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Matt Benjamin (redhat) <mbenjamin> |
| Component: | RGW | Assignee: | Pritha Srivastava <prsrivas> |
| Status: | CLOSED ERRATA | QA Contact: | Tejas <tchandra> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 3.2 | CC: | agunn, anharris, cbodley, ceph-eng-bugs, kbader, mbenjamin, sweil, tchandra, tserlin |
| Target Milestone: | z2 | | |
| Target Release: | 3.2 | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
| Fixed In Version: | RHEL: ceph-12.2.8-113.el7cp Ubuntu: ceph_12.2.8-96redhat1xenial | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-04-30 15:57:08 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2019:0911
From upstream tracker:

"""
Hi, I noticed a bug when accessing Ceph via Hadoop. I am using some shared buckets with read/write access for all users. Here is the policy for the bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAll",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::<bucket>/*",
        "arn:aws:s3:::<bucket>"
      ]
    }
  ]
}
```

However, if a user other than the owner (or even an anonymous user) issues a GetObject/HeadObject on a non-existent object, Radosgw returns status code 403, which makes the Hadoop write fail. From the official S3 documentation:

> If a requested object doesn't exist in the bucket and the requester doesn't have s3:ListBucket access, then the requester receives an HTTP 403 (Access Denied) error rather than the HTTP 404 (Not Found) error.

I tried this in AWS, and a bucket with the same policy returns 404, which should be the correct behaviour since ListBucket is allowed.
"""
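The mismatch is straightforward to observe with a short boto3 script. The sketch below is a hypothetical reproduction, not part of this report: the endpoint URL, credentials, bucket, and key are all placeholders, and the credentials are assumed to belong to a user who is not the bucket owner but is covered by the `Principal: "*"` policy above.

```python
# Hypothetical reproduction sketch: HeadObject on a missing key as a
# non-owner user. With s3:ListBucket granted by the bucket policy, the
# expected status is 404; affected RGW builds returned 403 instead.
import boto3
import botocore

# All of these values are placeholders; substitute your RGW endpoint and
# the credentials of a non-owner user.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",
    aws_access_key_id="NON_OWNER_ACCESS_KEY",
    aws_secret_access_key="NON_OWNER_SECRET_KEY",
)

try:
    s3.head_object(Bucket="shared-bucket", Key="does-not-exist")
except botocore.exceptions.ClientError as e:
    status = e.response["ResponseMetadata"]["HTTPStatusCode"]
    # AWS S3 (and RGW builds with this fix) report 404 here, because the
    # requester has s3:ListBucket on the bucket.
    print("HeadObject on missing key returned HTTP", status)
```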