Bug 2125121 - [Metro-DR] when DRPolicy created, s3profile secrets are not getting created for managed clusters
Summary: [Metro-DR] when DRPolicy created, s3profile secrets are not getting created ...
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-dr
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Benamar Mekhissi
QA Contact: krishnaram Karthick
URL:
Whiteboard:
Duplicates: 2125120
Depends On:
Blocks:
 
Reported: 2022-09-08 04:38 UTC by Amarnath
Modified: 2023-08-09 17:00 UTC
CC: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-09-29 13:33:53 UTC
Embargoed:



Description Amarnath 2022-09-08 04:38:19 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
When a DRPolicy is created, s3profile secrets are not created for the managed clusters.
Steps Followed:
1. Created 3 clusters: primary, secondary, and hub.
2. Followed the document below:
https://red-hat-storage.github.io/ocs-training/training/ocs4/odf411-metro-ramen.html#_create_data_policy_on_hub_cluster
3. All steps through chapter 8 were successful.
4. After creating the DR policy, the S3 profiles go to s3ListFailed status.

[root@localhost Desktop]# ./oc get drpolicy drpolicy -o jsonpath='{.status.conditions[].reason}{"\n"}'
Succeeded
[root@localhost Desktop]# ./oc get csv,pod -n openshift-dr-system
No resources found in openshift-dr-system namespace.
[root@localhost Desktop]# ./oc get csv,pod -n openshift-dr-system
No resources found in openshift-dr-system namespace.
[root@localhost Desktop]# ./oc get nodes
NAME                                  STATUS   ROLES           AGE   VERSION
cephqe-node4.lab.eng.blr.redhat.com   Ready    master,worker   29h   v1.24.0+b62823b
[root@localhost Desktop]# ./oc get csv,pod -n openshift-dr-system
No resources found in openshift-dr-system namespace.
[root@localhost Desktop]# ./oc get csv,pod -n openshift-dr-system
No resources found in openshift-dr-system namespace.
[root@localhost Desktop]# ./oc get drclusters
NAME            AGE
ocp-mr-2308-1   8h
ocp-mr-2308-2   8h
[root@localhost Desktop]# ./oc get drcluster ocp-mr-2308-1 -o jsonpath='{.status.conditions[2].reason}{"\n"}'
s3ListFailed


 


Version of all relevant components (if applicable):

[root@localhost Desktop]# ./oc version
Client Version: 4.11.0-0.nightly-2022-09-02-184920
Kustomize Version: v4.5.4
Server Version: 4.11.0-0.nightly-2022-09-02-184920
Kubernetes Version: v1.24.0+b62823b

ACM: 2.5.1
Ceph: 6.0

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?
Earlier, we used to create odrbucket.yaml with AWS keys.

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Can this issue reproducible?


Can this issue reproduce from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1.
2.
3.


Actual results:


Expected results:


Additional info:

Comment 3 Amarnath 2022-09-08 17:00:14 UTC
*** Bug 2125120 has been marked as a duplicate of this bug. ***

Comment 4 Benamar Mekhissi 2022-09-13 12:40:23 UTC
The DRCluster is failing to validate access to the s3Store due to a "certificate signed by an unknown authority" error.
```
Reconciler error        {"reconciler group": "ramendr.openshift.io", "reconciler kind": "DRCluster", "name": "ocp-mr-2308-1", "namespace": "", "error": "drclusters s3Profile validate: s3profile-ocp-mr-2308-1-ocs-external-storagecluster: failed to list objects in bucket odrbucket-040c8ac66894:/ocp-mr-2308-1, RequestError: send request failed\ncaused by: Get \"https://s3-openshift-storage.apps.ocp-mr-2308-1.ceph-qe.rh-ocs.com/odrbucket-040c8ac66894?list-type=2&prefix=%2Focp-mr-2308-1\": x509: certificate signed by unknown authority"}
```
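The failure class in the log above can be reproduced locally without a cluster. The sketch below is illustrative only: the throwaway CA and the `s3.example.test` name are made up stand-ins for the managed cluster's ingress CA and the S3 route certificate, not anything taken from the affected environment.

```shell
tmpdir=$(mktemp -d)

# Throwaway CA (stands in for the managed cluster's ingress CA):
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$tmpdir/ca.key" -out "$tmpdir/ca.crt" \
  -subj "/CN=illustrative-ca" -days 1 2>/dev/null

# Server certificate signed by that CA (stands in for the S3 route cert):
openssl req -newkey rsa:2048 -nodes \
  -keyout "$tmpdir/s3.key" -out "$tmpdir/s3.csr" \
  -subj "/CN=s3.example.test" 2>/dev/null
openssl x509 -req -in "$tmpdir/s3.csr" -CA "$tmpdir/ca.crt" \
  -CAkey "$tmpdir/ca.key" -CAcreateserial -out "$tmpdir/s3.crt" \
  -days 1 2>/dev/null

# With the CA in the trust bundle, the chain verifies:
openssl verify -CAfile "$tmpdir/ca.crt" "$tmpdir/s3.crt"

# Without it, verification fails ("unable to get local issuer
# certificate"), the CLI analogue of Go's "x509: certificate signed
# by unknown authority" seen in the reconciler log:
openssl verify "$tmpdir/s3.crt" || true
```

The same check applies on the hub: if the managed cluster's ingress CA is missing (or corrupted) in the trust bundle that the DR operator consumes, every S3 list call will fail this way.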

Comment 5 Amarnath 2022-09-16 10:01:29 UTC
There was an indentation issue in cm-clusters-crt.yaml.
The patch operation with the YAML succeeded even though there was an extra line in the YAML.

The DRPolicy validation status was Validated even though the certificates added to the primary, secondary, and hub clusters did not match.
Can we have a validation step that verifies the S3 certificate addition?
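For reference, a sketch of what a correctly indented cm-clusters-crt.yaml looks like, following the structure in the linked training document (the ConfigMap name, namespace, and key are taken from that document; the PEM bodies are placeholders). Every line of the certificate text under the `ca-bundle.crt: |` literal block scalar must share the same indentation; a stray or mis-indented line corrupts the bundle while `oc patch` still applies the YAML cleanly.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-ca-bundle
  namespace: openshift-config
data:
  ca-bundle.crt: |
    -----BEGIN CERTIFICATE-----
    <contents of the primary cluster's ingress CA cert here>
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    <contents of the secondary cluster's ingress CA cert here>
    -----END CERTIFICATE-----
```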

Regards,
Amarnath

Comment 6 rakesh 2022-09-29 13:33:53 UTC
Since this is not a Ramen issue but a misconfiguration in creating the SSL certs, validation will not happen in Ramen, so we are closing this as NOTABUG.

