.Provisioning object bucket claim with the same bucket name
Previously, for the greenfield use case, creating two object bucket claims (OBCs) with the same bucket name succeeded from the user interface. Although both OBCs were created, the second one pointed to invalid credentials.
With this fix, creation of a second OBC with the same bucket name is blocked, so two OBCs can no longer claim the same bucket name in the greenfield use case.
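On versions without the fix, a client-side pre-check can serve as a stopgap. The sketch below is hypothetical: the commented `oc` invocation assumes cluster access, while `bucket_name_in_use` itself only inspects the text it is given, so it can be exercised on canned output.

```shell
# Hypothetical stopgap: before applying a new OBC, check whether any existing
# OBC already claims the same spec.bucketName.
bucket_name_in_use() {
  # $1 = candidate bucket name; stdin = one already-claimed bucket name per line
  grep -qx -- "$1"
}

# Live usage (assumes oc access; names are from the reproduction steps below):
#   oc get obc -A -o jsonpath='{range .items[*]}{.spec.bucketName}{"\n"}{end}' \
#     | bucket_name_in_use test-bucket \
#     && echo "refusing: test-bucket is already claimed by another OBC"
```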
Description by Daniel Dominguez, 2023-08-03 09:22:55 UTC
Description of problem (please be as detailed as possible and provide log snippets):
Creating OBCs in RadosGW with the same bucket name completes successfully. But when the users want to use the bucket, they cannot access it because their credentials are invalid.
Version of all relevant components (if applicable):
OCP: 4.12.25
ODF: 4.12.5
Ceph: 16.2.10-172.el8cp
Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
No
Is there any workaround available to the best of your knowledge?
Not creating buckets with the same fixed name, but you never know what your end users will do
Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1
Is this issue reproducible?
Sure
Can this issue reproduce from the UI?
Yes, when creating OBCs using the OpenShift console
If this is a regression, please provide more details to justify this:
AFAIK this is not a regression
Steps to Reproduce:
1. Create new projects:
$ oc new-project radosgw-users
$ oc new-project radosgw-users-2
2. Create the OBC with the fixed name in RGW:
$ cat << EOF > 01-rgw-obc1.yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: rgw-obc1
  namespace: radosgw-users
spec:
  bucketName: test-bucket
  storageClassName: ocs-storagecluster-ceph-rgw
EOF
$ oc create -f 01-rgw-obc1.yaml
$ oc -n radosgw-users get obc
NAME       STORAGE-CLASS                 PHASE   AGE
rgw-obc1   ocs-storagecluster-ceph-rgw   Bound   65s
$ radosgw-admin bucket stats --bucket test-bucket
...
"owner": "obc-radosgw-users-rgw-obc1",
...
3. Create the same fixed bucket (with a different OBC name) in the same namespace:
$ cat << EOF > 02-rgw-obc2.yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: rgw-obc2
  namespace: radosgw-users
spec:
  bucketName: test-bucket
  storageClassName: ocs-storagecluster-ceph-rgw
EOF
$ oc create -f 02-rgw-obc2.yaml
$ oc -n radosgw-users get obc
NAME       STORAGE-CLASS                 PHASE   AGE
rgw-obc1   ocs-storagecluster-ceph-rgw   Bound   4m11s
rgw-obc2   ocs-storagecluster-ceph-rgw   Bound   60s
4. Get the credentials and try to access the test-bucket created by rgw-obc2 OBC:
$ S3_ENDPOINT=$(oc -n openshift-storage get route s3-rgw -o jsonpath='{.spec.host}')
$ RGW_OBC2_ACCESS_KEY=$(oc -n radosgw-users get secrets rgw-obc2 -o go-template='{{.data.AWS_ACCESS_KEY_ID | base64decode }}')
$ RGW_OBC2_SECRET_KEY=$(oc -n radosgw-users get secrets rgw-obc2 -o go-template='{{.data.AWS_SECRET_ACCESS_KEY | base64decode }}')
$ cat << EOF > ~/.rgw-obc2.s3cfg
[default]
access_key = ${RGW_OBC2_ACCESS_KEY}
secret_key = ${RGW_OBC2_SECRET_KEY}
host_base = ${S3_ENDPOINT}
host_bucket = ${S3_ENDPOINT}
use_https = True
EOF
$ s3cmd -c ~/.rgw-obc2.s3cfg ls s3://test-bucket
ERROR: Access to bucket 'test-bucket' was denied
ERROR: S3 error: 403 (AccessDenied)
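The 403 is consistent with the bucket stats from step 2: the bucket owner is the RGW user created for rgw-obc1, not rgw-obc2. A small comparison helper makes that check scriptable. This is a sketch; the commented lines that would feed it live values assume cluster access and `jq`, and the second owner string is the name the provisioner would have used for rgw-obc2.

```shell
# Sketch: confirm whether a given OBC's RGW user actually owns the bucket.
same_owner() {
  # $1 = owner reported by radosgw-admin, $2 = the OBC's expected RGW user
  [ "$1" = "$2" ]
}

# Live usage (assumes cluster access and jq; not run here):
#   OWNER=$(radosgw-admin bucket stats --bucket test-bucket | jq -r '.owner')
#   same_owner "$OWNER" "obc-radosgw-users-rgw-obc2" \
#     || echo "rgw-obc2's credentials do not own test-bucket, hence the 403"
```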
5. Create the same fixed bucket (with a different OBC name) in a different namespace:
$ cat << EOF > 03-rgw-obc3.yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: rgw-obc3
  namespace: radosgw-users-2
spec:
  bucketName: test-bucket
  storageClassName: ocs-storagecluster-ceph-rgw
EOF
$ oc create -f 03-rgw-obc3.yaml
$ oc -n radosgw-users-2 get obc
NAME       STORAGE-CLASS                 PHASE   AGE
rgw-obc3   ocs-storagecluster-ceph-rgw   Bound   6s
6. Get the credentials and try to access the test-bucket created by rgw-obc3 OBC:
$ S3_ENDPOINT=$(oc -n openshift-storage get route s3-rgw -o jsonpath='{.spec.host}')
$ RGW_OBC3_ACCESS_KEY=$(oc -n radosgw-users-2 get secrets rgw-obc3 -o go-template='{{.data.AWS_ACCESS_KEY_ID | base64decode }}')
$ RGW_OBC3_SECRET_KEY=$(oc -n radosgw-users-2 get secrets rgw-obc3 -o go-template='{{.data.AWS_SECRET_ACCESS_KEY | base64decode }}')
$ cat << EOF > ~/.rgw-obc3.s3cfg
[default]
access_key = ${RGW_OBC3_ACCESS_KEY}
secret_key = ${RGW_OBC3_SECRET_KEY}
host_base = ${S3_ENDPOINT}
host_bucket = ${S3_ENDPOINT}
use_https = True
EOF
$ s3cmd -c ~/.rgw-obc3.s3cfg ls s3://test-bucket
ERROR: Access to bucket 'test-bucket' was denied
ERROR: S3 error: 403 (AccessDenied)
Actual results:
OBCs are provisioned successfully from the end user's point of view, but when they access the bucket they get access denied (as they should, since their credentials do not own it)
Expected results:
OBC creation should fail as there is already a bucket in RadosGW with the same name
Additional info:
When doing the same testing using the NooBaa storage class, the behaviour is correct: the OBC stays in Pending state and never reaches Bound
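Since Bound does not guarantee usable credentials on affected versions, a consumer can at least gate on the reported phase, mirroring the NooBaa provisioner's behaviour. A minimal sketch: the commented `oc` line assumes cluster access, and `obc_ready` itself only classifies a phase string.

```shell
# Sketch: treat an OBC as consumable only once .status.phase reports Bound.
obc_ready() {
  # $1 = the phase string from the OBC's status
  [ "$1" = "Bound" ]
}

# Live usage (assumes oc access):
#   obc_ready "$(oc -n radosgw-users get obc rgw-obc2 -o jsonpath='{.status.phase}')" \
#     || echo "OBC not ready"
```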
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.15.0 security, enhancement, & bug fix update), and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2024:1383