Bug 2259033
Summary: | [MDR] DRCluster annotations doc needs to be more generic | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Jenifer Abrams <jhopper>
Component: | documentation | Assignee: | Kusuma <kbg>
Status: | ASSIGNED | QA Contact: | Neha Berry <nberry>
Severity: | medium | Docs Contact: |
Priority: | unspecified | |
Version: | 4.14 | CC: | kseeger, muagarwa, odf-bz-bot, rtalur, sheggodu
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
Description
Jenifer Abrams, 2024-01-18 18:36:52 UTC
I ran into issues with my previous workaround when trying to fail over an application:

    Warning  FailedMount  106s (x3 over 5m53s)  kubelet  Unable to attach or mount volumes: unmounted volumes=[mypvc], unattached volumes=[], failed to process volumes=[]: timed out waiting for the condition
    Warning  FailedMount  61s (x4 over 7m7s)    kubelet  MountVolume.MountDevice failed for volume "pvc-672bac8d-e076-425f-83f6-ad763491ab17" : fetching NodeStageSecretRef openshift-storage/rook-csi-rbd-node-cluster1-rbdpool failed: kubernetes.io/csi: failed to find the secret rook-csi-rbd-node-cluster1-rbdpool in the namespace openshift-storage with error: secrets "rook-csi-rbd-node-cluster1-rbdpool" not found

I would like to confirm whether --cluster-name is incompatible with DR, or whether this secret name needs adjustment.

If I follow the current MetroDR docs exactly and use --run-as-user when running the ceph-external-cluster-details-exporter.py script, I reproduce this bug: https://bugzilla.redhat.com/show_bug.cgi?id=2254159

Just to note: I had installed the latest 4.14 ODF Multicluster & Hub operators when I hit this "drcluster.ramendr.openshift.io/storage-secret-name" secret issue.

Moving this bug out of 4.16 as we have not worked on the doc changes yet.

TODO: update the MDR docs to illustrate which names of the RBD provisioner secret to use when annotating the DRCluster. The current doc only works if no cluster name is provided while configuring ODF with the external cluster.

@rtalur, could you please provide the doc text? Thanks!
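Until the official doc text lands, a minimal sketch of the flow the TODO describes might look like the following. The annotation key and the openshift-storage namespace are taken from this report; the secret name rook-csi-rbd-provisioner-cluster1-rbdpool, the DRCluster name my-drcluster, and the placement of the DRCluster resource on the hub cluster are illustrative assumptions based on the cluster1/rbdpool naming visible in the mount error above, not confirmed doc content.

    # Minimal sketch, assuming the external cluster exporter was run with a cluster
    # name of "cluster1" and an RBD pool named "rbdpool" (all names are illustrative).

    # 1. On the managed cluster, look up the actual RBD provisioner secret name; when a
    #    cluster name is used, it differs from the default "rook-csi-rbd-provisioner".
    oc get secrets -n openshift-storage | grep rook-csi-rbd-provisioner

    # 2. On the hub cluster, annotate the matching DRCluster with that secret name
    #    ("my-drcluster" is a placeholder).
    oc annotate drcluster my-drcluster \
        drcluster.ramendr.openshift.io/storage-secret-name=rook-csi-rbd-provisioner-cluster1-rbdpool \
        --overwrite

Whether the node-plugin secret referenced in the mount error (rook-csi-rbd-node-cluster1-rbdpool) needs a matching adjustment is part of what the requested doc text should clarify.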