Description of problem (please be as detailed as possible and provide log snippets):

When a DRPolicy is created, the S3 profile secrets are not getting created for the managed clusters.

Steps followed:
1. Created 3 clusters: primary, secondary, and hub.
2. Followed the document below:
   https://red-hat-storage.github.io/ocs-training/training/ocs4/odf411-metro-ramen.html#_create_data_policy_on_hub_cluster
3. All the steps up to chapter 8 were successful.
4. After creating the DRPolicy, the S3 profiles go into s3ListFailed status.

[root@localhost Desktop]# ./oc get drpolicy drpolicy -o jsonpath='{.status.conditions[].reason}{"\n"}'
Succeeded

[root@localhost Desktop]# ./oc get csv,pod -n openshift-dr-system
No resources found in openshift-dr-system namespace.

[root@localhost Desktop]# ./oc get nodes
NAME                                  STATUS   ROLES           AGE   VERSION
cephqe-node4.lab.eng.blr.redhat.com   Ready    master,worker   29h   v1.24.0+b62823b

[root@localhost Desktop]# ./oc get drclusters
NAME            AGE
ocp-mr-2308-1   8h
ocp-mr-2308-2   8h

[root@localhost Desktop]# ./oc get drcluster ocp-mr-2308-1 -o jsonpath='{.status.conditions[2].reason}{"\n"}'
s3ListFailed

Version of all relevant components (if applicable):

[root@localhost Desktop]# ./oc version
Client Version: 4.11.0-0.nightly-2022-09-02-184920
Kustomize Version: v4.5.4
Server Version: 4.11.0-0.nightly-2022-09-02-184920
Kubernetes Version: v1.24.0+b62823b

ACM: 2.5.1
Ceph: 6.0

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?

Is there any workaround available to the best of your knowledge?
Earlier we used to create odrbucket.yaml with the AWS keys.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Can this issue be reproduced?

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
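For additional triage, the following is a hedged sketch of hub-side checks to see whether the S3 store profiles and bucket secrets were actually generated. The ConfigMap name ramen-hub-operator-config, the key ramen_manager_config.yaml, the openshift-operators namespace, and the odrbucket substring are assumptions based on a default Metro-DR deployment and may differ in this environment.

```
# Hedged sketch, assuming a default Metro-DR hub deployment; adjust names/namespaces.

# Dump the ramen hub operator configuration and look for the s3StoreProfiles
# entries that the DRPolicy/DRCluster reconcilers consume (ConfigMap and key
# names are assumed defaults).
oc get configmap ramen-hub-operator-config -n openshift-operators \
  -o jsonpath='{.data.ramen_manager_config\.yaml}'

# Look for the S3 connection secrets that should exist for each managed cluster
# ("odrbucket" is taken from the bucket name in this report).
oc get secrets -n openshift-operators | grep -i odrbucket

# Inspect all DRCluster status conditions, not only index [2].
oc get drcluster ocp-mr-2308-1 -o json | jq '.status.conditions'
```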
*** Bug 2125120 has been marked as a duplicate of this bug. ***
The DRCluster is failing to validate access to the s3Store due to a "certificate signed by an unknown authority" error. ``` Reconciler error {"reconciler group": "ramendr.openshift.io", "reconciler kind": "DRCluster", "name": "ocp-mr-2308-1", "namespace": "", "error": "drclusters s3Profile validate: s3profile-ocp-mr-2308-1-ocs-external-storagecluster: failed to list objects in bucket odrbucket-040c8ac66894:/ocp-mr-2308-1, RequestError: send request failed\ncaused by: Get \"https://s3-openshift-storage.apps.ocp-mr-2308-1.ceph-qe.rh-ocs.com/odrbucket-040c8ac66894?list-type=2&prefix=%2Focp-mr-2308-1\": x509: certificate signed by unknown authority"} ```
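A quick way to confirm the unknown-authority failure outside of Ramen is to inspect the certificate chain presented by the managed cluster's S3 route. This is a hedged sketch using plain openssl; the endpoint is copied from the error above, and /path/to/ca-bundle.crt is a placeholder for the concatenated CA bundle distributed via cm-clusters-crt.yaml.

```
# Show the issuer/subject of the certificate served by the S3 route
# (endpoint copied from the reconciler error above).
openssl s_client -connect s3-openshift-storage.apps.ocp-mr-2308-1.ceph-qe.rh-ocs.com:443 \
  -showcerts </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject -dates

# Verify the chain against the CA bundle that was distributed to the clusters;
# a non-zero "Verify return code" reproduces the x509 error seen by Ramen.
openssl s_client -connect s3-openshift-storage.apps.ocp-mr-2308-1.ceph-qe.rh-ocs.com:443 \
  -CAfile /path/to/ca-bundle.crt </dev/null 2>&1 | grep 'Verify return code'
```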
There was an indentation issue in cm-clusters-crt.yaml. The patch operation with that YAML succeeded even though the file contained an extra line, and the DRPolicy validation status reported Validated even though the certificates added to the primary, secondary, and hub clusters did not match. Can we have a validation step that verifies the S3 certificate addition?

Regards,
Amarnath
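Until such a check exists in the product, a bad cm-clusters-crt.yaml can be caught manually before patching. The sketch below assumes the Metro-DR procedure from the linked document (ingress CA taken from the default-ingress-cert ConfigMap in openshift-config-managed, concatenated into a user-ca-bundle ConfigMap in openshift-config); verify those names against the document for your version.

```
# Extract the ingress CA from one managed cluster; repeat with the other
# cluster's kubeconfig and concatenate the results into the bundle.
oc get configmap default-ingress-cert -n openshift-config-managed \
  -o jsonpath='{.data.ca-bundle\.crt}' > primary-ca.crt

# A client-side dry run catches structural YAML errors in cm-clusters-crt.yaml,
# though not a stray line hidden inside the ca-bundle.crt block scalar.
oc apply -f cm-clusters-crt.yaml --dry-run=client -o yaml > /dev/null

# After applying, count the certificates actually stored on each of the three
# clusters and compare with the number of certificates that were concatenated.
oc get configmap user-ca-bundle -n openshift-config \
  -o jsonpath='{.data.ca-bundle\.crt}' | grep -c 'BEGIN CERTIFICATE'
```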
Since this is not a Ramen issue but a configuration problem in creating the SSL certs, this validation will not happen in Ramen, so we are closing this as not a bug.