Bug 1984559
| Summary: | [IBM Z]: RGW bucket does not exist on Rados Object Gateway though it is present in OpenShift Container Storage | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Container Storage | Reporter: | Sravika <sbalusu> |
| Component: | documentation | Assignee: | Agil Antony <agantony> |
| Status: | POST --- | QA Contact: | Elad <ebenahar> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.8 | CC: | agantony, asriram, brgardne, jthottan |
| Target Milestone: | --- | | |
| Target Release: | OCS 4.8.3 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Description (Sravika, 2021-07-21 15:39:38 UTC)
Blaine, PTAL at this issue related to OBCs.

Let's first rule out an issue where the wrong credentials may have been used. AFAICT, the endpoint should be correct. @sbalusu, please be very explicit: which secret are you accessing to get the credentials, which specific data items from that secret are you using, and how are you passing the credentials on the command line to access the bucket? There is no such data item as RGW_USER_ACCESS_KEY or RGW_USER_SECRET_ACCESS_KEY, nor is there an RGWUser resource type. I think what you mean is AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for the former, but I'm not sure what you are referencing by "RGW user secret." If you have used credentials from the CephObjectStoreUser secret 'rook-ceph-object-user-ocs-storagecluster-cephobjectstore-ocs-storagecluster-cephobjectstoreuser' or 'rook-ceph-object-user-ocs-storagecluster-cephobjectstore-noobaa-ceph-objectstore-user', that would be incorrect. The endpoint would be the same, but those users would not have buckets unless they were created manually, which would lead to the behavior you are describing. The OBC has a credential secret named 'test-rgw-bucket', and a separate user is created for it. The 'test-rgw-bucket' ConfigMap has the expected bucket name (test-rgw-bucket-eebdac78-4dbe-4dfe-b50b-c2a5b791ad39).

@brgardne: Thanks for the clarification. With the correct AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY of the OBC user, the RGW buckets are listed and the creation of RGW as a backing store of the Multicloud Object Gateway was successful:

# aws --endpoint http://ocs-storagecluster-cephobjectstore-openshift-storage.apps.ocsm4205001.lnxne.boe --no-verify-ssl s3 ls
2021-07-20 16:11:12 test-rgw-bucket-eebdac78-4dbe-4dfe-b50b-c2a5b791ad39

And yes, you are right: I was using the credentials of the default CephObjectStoreUser "rook-ceph-object-user-ocs-storagecluster-cephobjectstore-ocs-storagecluster-cephobjectstoreuser" to retrieve the buckets. However, I was only following the Red Hat documentation, which says to use the CephObjectStoreUser credentials for accessing the RGW S3 endpoint: https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/4.7/html/managing_hybrid_and_multicloud_resources/accessing-the-rados-object-gateway-s3-endpoint_rhocs. You can also find the terms "RGW USER ACCESS KEY", "RGW USER SECRET ACCESS KEY", and "RGW USER SECRET NAME" in the Red Hat documentation on adding RGW as a backing store of the Multicloud Object Gateway: https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/4.7/html/managing_hybrid_and_multicloud_resources/adding-storage-resources-for-hybrid-or-multicloud#creating-an-s3-compatible-Multicloud-Object-Gateway-backingstore_rhocs

It sounds like we might want to move this to a documentation issue then. Doing that now.
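For context, the steps that resolved the listing problem look roughly like the following. This is a sketch, not an excerpt from the report: the secret name 'test-rgw-bucket' and the route endpoint are taken from the comments above, and the commands assume they are run in the namespace where the ObjectBucketClaim was created.

```bash
# Pull the OBC user's credentials from the Secret that shares the OBC's name
# (run in the namespace where the ObjectBucketClaim was created).
export AWS_ACCESS_KEY_ID=$(oc get secret test-rgw-bucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
export AWS_SECRET_ACCESS_KEY=$(oc get secret test-rgw-bucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)

# List buckets against the external RGW route (endpoint copied from the comment above).
aws --endpoint http://ocs-storagecluster-cephobjectstore-openshift-storage.apps.ocsm4205001.lnxne.boe --no-verify-ssl s3 ls
```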
To clarify for the doc team, when creating an ObjectBucketClaim, bucket connection information for the created bucket can be accessed from a ConfigMap with the same name as the ObjectBucketClaim, and access credentials can be found in a Secret with the same name.
I'm having trouble accessing the doc links to verify right now. It's possible this information is documented in a different section, but if not, I hope this is useful information to add.
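To illustrate, a minimal sketch of an ObjectBucketClaim and the objects it produces. The claim name is taken from this report; the generateBucketName prefix and the storage class name 'ocs-storagecluster-ceph-rgw' are assumptions for the example and may differ per cluster.

```bash
# Hypothetical OBC against the RGW storage class; adjust names for your cluster.
cat <<EOF | oc create -f -
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: test-rgw-bucket
spec:
  generateBucketName: test-rgw-bucket       # assumed prefix for illustration
  storageClassName: ocs-storagecluster-ceph-rgw   # assumed RGW storage class name
EOF

# Once the claim is bound, a ConfigMap and a Secret with the same name exist:
oc get configmap,secret test-rgw-bucket
```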
The ConfigMap has data fields (with examples):
BUCKET_HOST: rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc
BUCKET_NAME: test-rgw-bucket-eebdac78-4dbe-4dfe-b50b-c2a5b791ad39
BUCKET_PORT: "80"
BUCKET_REGION: us-east-1
BUCKET_SUBREGION: ""
The Secret has data fields:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
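Putting the two together, a sketch of consuming these fields, assuming the ConfigMap and Secret named 'test-rgw-bucket' are in the current namespace and the credentials have been exported from the Secret as in the earlier sketch:

```bash
# Connection details come from the ConfigMap that shares the OBC's name.
BUCKET_HOST=$(oc get configmap test-rgw-bucket -o jsonpath='{.data.BUCKET_HOST}')
BUCKET_PORT=$(oc get configmap test-rgw-bucket -o jsonpath='{.data.BUCKET_PORT}')
BUCKET_NAME=$(oc get configmap test-rgw-bucket -o jsonpath='{.data.BUCKET_NAME}')

# BUCKET_HOST is the in-cluster RGW service address, so this is expected to work
# from a pod inside the cluster (credentials exported from the Secret as shown earlier).
aws --endpoint "http://${BUCKET_HOST}:${BUCKET_PORT}" s3 ls "s3://${BUCKET_NAME}"
```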