Bug 2228785 - [ODF][OBC] Provisioning of Radosgw OBCs with the same bucket name is successful [NEEDINFO]
Summary: [ODF][OBC] Provisioning of Radosgw OBCs with the same bucket name is successful
Keywords:
Status: NEW
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ceph
Version: 4.12
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Matt Benjamin (redhat)
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-08-03 09:22 UTC by Daniel Dominguez
Modified: 2023-08-14 06:22 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:
muagarwa: needinfo? (mbenjamin)


Attachments

Description Daniel Dominguez 2023-08-03 09:22:55 UTC
Description of problem (please be as detailed as possible and provide log snippets):

Creating OBCs in RadosGW with the same bucket name completes successfully. However, when users try to use the bucket, they get access denied, because the credentials issued for the duplicate OBC are not authorized for the existing bucket.


Version of all relevant components (if applicable):
OCP: 4.12.25
ODF: 4.12.5
Ceph: 16.2.10-172.el8cp


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
No

Is there any workaround available to the best of your knowledge?
Avoid requesting the same fixed bucket name in more than one OBC, but end users cannot be relied on to do that (see the sketch below).
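
A minimal sketch of a safer pattern, assuming the generateBucketName field of the ObjectBucketClaim CRD is available in this ODF version (it is used as a prefix and the provisioner appends a random suffix, so fixed names cannot collide). The OBC name below is only illustrative:
		$ cat << EOF > rgw-obc-generated.yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: rgw-obc-generated
  namespace: radosgw-users
spec:
  # Prefix only; the provisioner generates the final bucket name
  generateBucketName: test-bucket
  storageClassName: ocs-storagecluster-ceph-rgw
EOF
		$ oc create -f rgw-obc-generated.yaml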


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1


Is this issue reproducible?
Yes, consistently.

Can this issue be reproduced from the UI?
Yes, when creating OBCs using the OpenShift console


If this is a regression, please provide more details to justify this:
As far as I know, this is not a regression.

Steps to Reproduce:
1. Create new projects:
	$ oc new-project radosgw-users
	$ oc new-project radosgw-users-2
2. Create the first OBC with a fixed bucket name in RGW:
		$ cat << EOF > 01-rgw-obc1.yaml 
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: rgw-obc1
  namespace: radosgw-users
spec:
  bucketName: test-bucket
  storageClassName: ocs-storagecluster-ceph-rgw
EOF
		$ oc create -f 01-rgw-obc1.yaml
		$ oc -n radosgw-users get obc
			NAME       STORAGE-CLASS                 PHASE   AGE
			rgw-obc1   ocs-storagecluster-ceph-rgw   Bound   65s
		$ radosgw-admin bucket stats --bucket test-bucket
			...
			"owner": "obc-radosgw-users-rgw-obc1",
			...
3. Create a second OBC (with a different OBC name) requesting the same fixed bucket name, in the same namespace (a verification sketch follows after these steps):
		$ cat << EOF > 02-rgw-obc2.yaml 
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: rgw-obc2
  namespace: radosgw-users
spec:
  bucketName: test-bucket
  storageClassName: ocs-storagecluster-ceph-rgw
EOF
		$ oc create -f 02-rgw-obc2.yaml
		$ oc -n radosgw-users get obc
			NAME       STORAGE-CLASS                 PHASE   AGE
			rgw-obc1   ocs-storagecluster-ceph-rgw   Bound   4m11s
			rgw-obc2   ocs-storagecluster-ceph-rgw   Bound   60s 
4. Get the credentials from the rgw-obc2 OBC and try to access test-bucket with them:
		$ S3_ENDPOINT=$(oc -n openshift-storage get route s3-rgw -o jsonpath='{.spec.host}')
		$ RGW_OBC2_ACCESS_KEY=$(oc -n radosgw-users get secrets rgw-obc2 -o go-template='{{.data.AWS_ACCESS_KEY_ID | base64decode }}')
		$ RGW_OBC2_SECRET_KEY=$(oc -n radosgw-users get secrets rgw-obc2 -o go-template='{{.data.AWS_SECRET_ACCESS_KEY | base64decode }}')
		$ cat << EOF > ~/.rgw-obc2.s3cfg
[default]
access_key = ${RGW_OBC2_ACCESS_KEY}
secret_key = ${RGW_OBC2_SECRET_KEY}
host_base = ${S3_ENDPOINT}
host_bucket = ${S3_ENDPOINT}
use_https = True
EOF
		$ s3cmd -c ~/.rgw-obc2.s3cfg ls s3://test-bucket
			ERROR: Access to bucket 'test-bucket' was denied
			ERROR: S3 error: 403 (AccessDenied)
5. Create a third OBC (with a different OBC name) requesting the same fixed bucket name, in a different namespace:
		$ cat << EOF > 03-rgw-obc3.yaml 
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: rgw-obc3
  namespace: radosgw-users-2
spec:
  bucketName: test-bucket
  storageClassName: ocs-storagecluster-ceph-rgw
EOF
		$ oc create -f 03-rgw-obc3.yaml 
		$ oc -n radosgw-users-2 get obc
			NAME       STORAGE-CLASS                 PHASE   AGE
			rgw-obc3   ocs-storagecluster-ceph-rgw   Bound   6s
6. Get the credentials from the rgw-obc3 OBC and try to access test-bucket with them:
		$ S3_ENDPOINT=$(oc -n openshift-storage get route s3-rgw -o jsonpath='{.spec.host}')
		$ RGW_OBC3_ACCESS_KEY=$(oc -n radosgw-users-2 get secrets rgw-obc3 -o go-template='{{.data.AWS_ACCESS_KEY_ID | base64decode }}')
		$ RGW_OBC3_SECRET_KEY=$(oc -n radosgw-users-2 get secrets rgw-obc3 -o go-template='{{.data.AWS_SECRET_ACCESS_KEY | base64decode }}')
		$ cat << EOF > ~/.rgw-obc3.s3cfg
[default]
access_key = ${RGW_OBC3_ACCESS_KEY}
secret_key = ${RGW_OBC3_SECRET_KEY}
host_base = ${S3_ENDPOINT}
host_bucket = ${S3_ENDPOINT}
use_https = True
EOF
		$ s3cmd -c ~/.rgw-obc3.s3cfg ls s3://test-bucket
			ERROR: Access to bucket 'test-bucket' was denied
			ERROR: S3 error: 403 (AccessDenied)
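
Verification sketch for steps 3-5. The ObjectBucket and RGW user names follow the obc-<namespace>-<obc-name> convention visible in step 2; the exact uid for the second OBC's user is an assumption and may differ:
		$ radosgw-admin bucket stats --bucket test-bucket | grep owner
			# Expected to still show the first OBC's user, "obc-radosgw-users-rgw-obc1"
		$ oc get ob
			# Lists the cluster-scoped ObjectBuckets backing each OBC
		$ radosgw-admin user info --uid obc-radosgw-users-rgw-obc2
			# Assumed uid for the second OBC's RGW user; it presumably holds the keys
			# from the rgw-obc2 secret but does not own test-bucket, hence the 403 above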


Actual results:
From the end user's point of view, all OBCs are provisioned successfully (they reach the Bound phase). However, when the bucket is accessed with the credentials of a duplicate OBC, the request is denied with 403 AccessDenied (which is correct, since that OBC's user does not own the bucket).

Expected results:
Creation of an OBC should fail (or at least not reach the Bound phase) when a bucket with the requested name already exists in RadosGW.


Additional info:
When the same test is performed using the NooBaa storage class, the behaviour is correct: the second OBC stays in the Pending phase and never becomes Bound.
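
For comparison, a sketch of the NooBaa test mentioned above, assuming the default openshift-storage.noobaa.io storage class name and that a first NooBaa OBC already bound the same bucket name (the OBC and bucket names below are only illustrative):
		$ cat << EOF > 04-noobaa-obc2.yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: noobaa-obc2
  namespace: radosgw-users
spec:
  # Same fixed bucket name as an already-Bound NooBaa OBC
  bucketName: test-bucket-noobaa
  storageClassName: openshift-storage.noobaa.io
EOF
		$ oc create -f 04-noobaa-obc2.yaml
		$ oc -n radosgw-users get obc noobaa-obc2
			# As described above, this OBC stays in Pending and never reaches Bound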

