Bug 1984559

Summary: [IBM Z]: RGW bucket does not exist on Rados Object Gateway though it is present in OpenShift Container Storage
Product: [Red Hat Storage] Red Hat OpenShift Container Storage
Reporter: Sravika <sbalusu>
Component: documentation
Assignee: Agil Antony <agantony>
Status: POST
QA Contact: Elad <ebenahar>
Severity: unspecified
Priority: unspecified
Version: 4.8
CC: agantony, asriram, brgardne, jthottan
Target Milestone: ---
Target Release: OCS 4.8.3
Hardware: Unspecified
OS: Unspecified
Doc Type: If docs needed, set a value
Type: Bug
Regression: ---
Attachments: rgw object bucket description

Description Sravika 2021-07-21 15:39:38 UTC
Created attachment 1804186 [details]
rgw object bucket description

Description of problem (please be as detailed as possible and provide log
snippets):


The RGW bucket does not exist on the RADOS Object Gateway when connecting to the RGW endpoint using S3. As a result, creating RGW as a backing store for the Multicloud Object Gateway fails with "TemporaryError Target bucket doesn't exist".



Version of all relevant components (if applicable):

OCP: 4.8.0-rc.3
OCS: 4.8.0-450.ci
LSO: 4.8.0-202106291913
Noobaa: 5.7.0 

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Create an RGW bucket in OpenShift Container Storage:
2. Log in to the OpenShift Web Console.
3. On the left navigation bar, click Storage → Object Bucket Claims.
4. Click Create Object Bucket Claim.
5. Enter a name for your object bucket claim and select the storage class ocs-storagecluster-ceph-rgw.
6. Click Create.
7. Fetch the <RGW USER ACCESS KEY> and <RGW USER SECRET ACCESS KEY> from the RGW user secret and configure them with aws.
8. Execute:
aws --endpoint <endpoint> --no-verify-ssl s3 ls
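Steps 7-8 can be sketched roughly as follows. The secret name and its AccessKey/SecretKey data keys are assumptions about the reporter's setup, not confirmed in this report; adjust for your cluster:

```shell
# Sketch of the reproduction path (steps 7-8). Names below are assumed
# from this report; substitute the ones on your cluster.
NS=openshift-storage
SECRET=rook-ceph-object-user-ocs-storagecluster-cephobjectstore-ocs-storagecluster-cephobjectstoreuser

# Secret data is base64-encoded, so decode it before exporting.
export AWS_ACCESS_KEY_ID=$(oc get secret "$SECRET" -n "$NS" \
  -o jsonpath='{.data.AccessKey}' | base64 -d)
export AWS_SECRET_ACCESS_KEY=$(oc get secret "$SECRET" -n "$NS" \
  -o jsonpath='{.data.SecretKey}' | base64 -d)

# Resolve the RGW route host and list buckets over S3.
ENDPOINT=$(oc get route ocs-storagecluster-cephobjectstore -n "$NS" \
  -o jsonpath='{.spec.host}')
aws --endpoint "http://$ENDPOINT" --no-verify-ssl s3 ls
```

As the later comments show, these are the credentials of the default CephObjectStoreUser, which is exactly why no buckets appear.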

Actual results:


# oc get route -n openshift-storage | grep rgw
ocs-storagecluster-cephobjectstore   ocs-storagecluster-cephobjectstore-openshift-storage.apps.ocsm4205001.lnxne.boe          rook-ceph-rgw-ocs-storagecluster-cephobjectstore   <all>                             None


# aws --endpoint http://ocs-storagecluster-cephobjectstore-openshift-storage.apps.ocsm4205001.lnxne.boe --no-verify-ssl s3 ls
#


# oc get obc -n openshift-storage
NAME              STORAGE-CLASS                 PHASE   AGE
test-rgw-bucket   ocs-storagecluster-ceph-rgw   Bound   25h
testobc-console   openshift-storage.noobaa.io   Bound   5d8h

Expected results:

RGW buckets should be listed when connected to the rgw endpoint with s3.

Additional info:

https://drive.google.com/file/d/1OFmzDx9iuCnhbdZNnFKxnlh0RSoYxeON/view?usp=sharing

Comment 2 Travis Nielsen 2021-07-21 17:45:13 UTC
Blaine PTAL at this issue related to OBCs

Comment 3 Blaine Gardner 2021-07-21 19:38:29 UTC
Let's first rule out an issue where the wrong credentials may have been used. AFAICT, the endpoint should be correct.

@sbalusu please be very explicit. Which secret are you accessing to get the credentials, which specific data are you using from the secret to get the credentials, and how are you specifying the credentials on the commandline to access the bucket?

There is no such data item as a RGW_USER_ACCESS_KEY or RGW_USER_SECRET_ACCESS_KEY. Nor is there a RGWUser resource type. I think what you mean is AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for the former, but I'm not sure what you are referencing by "RGW user secret."

If you have used credentials from CephObjectStoreUser secret 'rook-ceph-object-user-ocs-storagecluster-cephobjectstore-ocs-storagecluster-cephobjectstoreuser' or 'rook-ceph-object-user-ocs-storagecluster-cephobjectstore-noobaa-ceph-objectstore-user', that would be incorrect. The endpoint would be the same, but those users wouldn't have buckets if they weren't manually created, which would lead to the behavior you are describing.

The OBC has a credential secret named 'test-rgw-bucket' and has a separate user created for it. The 'test-rgw-bucket' ConfigMap has the expected bucket name (test-rgw-bucket-eebdac78-4dbe-4dfe-b50b-c2a5b791ad39).
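The two objects described above can be checked directly. This is a sketch; the assumption that the OBC Secret exposes AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY and the ConfigMap exposes BUCKET_NAME comes from the rest of this bug:

```shell
# Sketch: confirm which user's credentials you are actually holding.
# 'test-rgw-bucket' is the OBC name from this report.
NS=openshift-storage

# OBC credentials (the correct ones for the OBC's bucket):
oc get secret test-rgw-bucket -n "$NS" \
  -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d; echo

# Bucket name the OBC actually created:
oc get configmap test-rgw-bucket -n "$NS" \
  -o jsonpath='{.data.BUCKET_NAME}'; echo
```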

Comment 4 Sravika 2021-07-22 08:51:26 UTC
@brgardne: Thanks for the clarification. With the correct AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY of the OBC user, the RGW buckets are listed and the creation of RGW as a backing store for the Multicloud Object Gateway was successful.

# aws --endpoint http://ocs-storagecluster-cephobjectstore-openshift-storage.apps.ocsm4205001.lnxne.boe --no-verify-ssl s3 ls
2021-07-20 16:11:12 test-rgw-bucket-eebdac78-4dbe-4dfe-b50b-c2a5b791ad39



And yes, you are right: I was using the credentials of the default CephObjectStoreUser "rook-ceph-object-user-ocs-storagecluster-cephobjectstore-ocs-storagecluster-cephobjectstoreuser" to retrieve the buckets.

However, I was only following the Red Hat documentation, which says to use the CephObjectStoreUser credentials for accessing the RGW S3 endpoint.

https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/4.7/html/managing_hybrid_and_multicloud_resources/accessing-the-rados-object-gateway-s3-endpoint_rhocs 

You can also find the terms "RGW USER ACCESS KEY", "RGW USER SECRET ACCESS KEY", and "RGW USER SECRET NAME" in the Red Hat documentation on creating RGW as a backing store for the Multicloud Object Gateway.

https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/4.7/html/managing_hybrid_and_multicloud_resources/adding-storage-resources-for-hybrid-or-multicloud#creating-an-s3-compatible-Multicloud-Object-Gateway-backingstore_rhocs

Comment 5 Blaine Gardner 2021-07-22 15:41:05 UTC
It sounds like we might want to move this to a documentation issue then. Doing that now.

To clarify for the doc team: when an ObjectBucketClaim is created, connection information for the created bucket can be read from a ConfigMap with the same name as the ObjectBucketClaim, and access credentials can be found in a Secret with the same name.

I'm having trouble accessing the doc links to verify right now. It's possible this information is documented in a different section, but if not, I hope this gives the doc team enough information to add it.


The ConfigMap has data fields (with examples):
    BUCKET_HOST: rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc
    BUCKET_NAME: test-rgw-bucket-eebdac78-4dbe-4dfe-b50b-c2a5b791ad39
    BUCKET_PORT: "80"
    BUCKET_REGION: us-east-1
    BUCKET_SUBREGION: ""

The Secret has data fields:
    AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY
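Putting the ConfigMap and Secret fields together, a client could assemble its S3 endpoint roughly like this. This is a sketch assuming the OBC is named test-rgw-bucket as in this bug; note the BUCKET_HOST service address only resolves from inside the cluster:

```shell
# Sketch: build the S3 endpoint from the OBC ConfigMap and export the
# credentials from the matching Secret. OBC name assumed from this report.
NS=openshift-storage
OBC=test-rgw-bucket

HOST=$(oc get configmap "$OBC" -n "$NS" -o jsonpath='{.data.BUCKET_HOST}')
PORT=$(oc get configmap "$OBC" -n "$NS" -o jsonpath='{.data.BUCKET_PORT}')
export AWS_ACCESS_KEY_ID=$(oc get secret "$OBC" -n "$NS" \
  -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
export AWS_SECRET_ACCESS_KEY=$(oc get secret "$OBC" -n "$NS" \
  -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)

# List the bucket from inside the cluster (e.g. from a debug pod).
aws --endpoint "http://$HOST:$PORT" s3 ls
```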