From upstream tracker:
"""
Hi,

I noticed a bug when accessing Ceph via Hadoop. I am using some shared buckets with read/write access for all users. Here is the policy for the bucket:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAll",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::<bucket>/*",
        "arn:aws:s3:::<bucket>"
      ]
    }
  ]
}

However, if a user other than the owner (or even an anonymous user) issues a GetObject/HeadObject request for a nonexistent object, Radosgw returns status code 403, which makes the Hadoop write fail.

From the official S3 documentation:

    If a requested object doesn't exist in the bucket and the requester doesn't have s3:ListBucket access, then the requester receives an HTTP 403 (Access Denied) error rather than the HTTP 404 (Not Found) error.

I tried this in AWS, and a bucket with the same policy returns 404, which should be the correct behaviour since ListBucket is allowed.
"""
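For anyone wanting to verify the behaviour against their own gateway, here is a minimal reproduction sketch using boto3. The endpoint URL, bucket name, key, and credentials are placeholders and should be replaced with values for your environment; this only assumes the bucket carries the AllowAll policy quoted above.

# Reproduction sketch: HeadObject on a missing key as a non-owner user.
# Assumes boto3 is installed and the bucket policy above is attached.
import boto3
import botocore.exceptions

# Client for a non-owner user against the RGW S3 endpoint.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",   # placeholder RGW endpoint
    aws_access_key_id="NON_OWNER_ACCESS_KEY",     # placeholder credentials
    aws_secret_access_key="NON_OWNER_SECRET_KEY",
)

try:
    # HeadObject on a key that does not exist in the shared bucket.
    s3.head_object(Bucket="shared-bucket", Key="does-not-exist")
except botocore.exceptions.ClientError as e:
    status = e.response["ResponseMetadata"]["HTTPStatusCode"]
    # With s3:ListBucket allowed by the bucket policy, AWS S3 (and a fixed
    # RGW) returns 404 here; an affected RGW returns 403 instead.
    print("HeadObject on missing key returned HTTP %d" % status)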
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2019:0911