Description of problem (please be as detailed as possible and provide log snippets):

Currently, Ramen takes an S3 store configuration and creates a bucket per protected workload. This pollutes the S3 store and has further drawbacks: possible bucket name collisions, inability to reuse the same S3 store across multiple hub clusters, and the need to grant the Ramen operators rights to create/delete buckets on S3 stores where other applications may be storing additional data. A better approach is to use a single, pre-created bucket within an S3 store, so that the operator stores only its own information within that bucket, avoiding most of the above problems.

The change is requested in the 4.9 time frame to keep the data we persist (in this case, in S3) backward compatible, rather than bringing the change in 4.10 or later.

Version of all relevant components (if applicable):
4.9

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
No

Is there any workaround available to the best of your knowledge?
Yes: stay the course, but deal with the problem in later releases via upgrades (which is more complicated). The fix is ready upstream and is hence being requested for a backport via this bug.

Configuration changes:
----------------------
- The Ramen hub and dr-cluster config maps that store the S3-related configuration need an additional parameter, "s3Bucket: <bucket-name>", e.g.:
```
  - s3ProfileName: mcg-on-east
    s3CompatibleEndpoint: http://<MCG on east S3 end-point>
    s3Region: <MCG on east S3 region - OPTIONAL>
    s3Bucket: <bucket name>
    s3SecretRef:
      name: odr-s3secret-east
      namespace: odr-system
```
- The S3 store being used must have the required bucket created beforehand; otherwise writes to and reads from the bucket will fail (see the sketch at the end of this section).

Documentation changes:
----------------------
The above impacts the documentation being prepared for the 4.9 release, which needs to be updated with the additional steps/configuration requirements described above.
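Because the bucket must exist before Ramen can read from or write to it, the following is a minimal sketch of how it could be pre-created on an S3-compatible store (such as MCG) using boto3. The endpoint URL, bucket name, and credential environment variables below are hypothetical placeholders for illustration only, not values mandated by Ramen or ODR.

```python
# Minimal sketch: pre-create the bucket referenced by "s3Bucket" on an
# S3-compatible store. Endpoint, bucket name, and credential variables are
# hypothetical placeholders; substitute your own values.
import os

import boto3
from botocore.exceptions import ClientError


def ensure_bucket(endpoint_url: str, bucket_name: str) -> None:
    """Create the bucket on the S3-compatible store if it does not already exist."""
    s3 = boto3.client(
        "s3",
        endpoint_url=endpoint_url,
        aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
        aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
    )
    try:
        # Succeeds only if the bucket already exists and is accessible.
        s3.head_bucket(Bucket=bucket_name)
    except ClientError:
        s3.create_bucket(Bucket=bucket_name)


if __name__ == "__main__":
    # Hypothetical MCG endpoint and bucket name.
    ensure_bucket("http://s3.odr-system.svc:80", "odr-bucket-east")
```

The bucket can equally well be created through whatever tooling the S3 store provides (for example, an ObjectBucketClaim on MCG, or the store's own CLI/console); the only requirement is that the bucket named in "s3Bucket" exists and is accessible with the credentials referenced by "s3SecretRef".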