Created attachment 2002929 [details]
DR_cluster_operator_failed_state

Description of problem (please be detailed as possible and provide log
snippets):

The DR cluster operator was installed on both the primary and the secondary
cluster. The two clusters were then connected via Submariner with Globalnet
enabled, since they have overlapping networks. After that, both clusters were
added to a cluster set in ACM. From that point on, the operator has been in a
Failed state:

oc get csv,pod -n openshift-dr-system
NAME                                                                            DISPLAY                         VERSION        REPLACES   PHASE
clusterserviceversion.operators.coreos.com/odr-cluster-operator.v4.14.0-rhodf   Openshift DR Cluster Operator   4.14.0-rhodf              Failed
clusterserviceversion.operators.coreos.com/volsync-product.v0.8.0               VolSync                         0.8.0                     Succeeded

NAME                                             READY   STATUS    RESTARTS        AGE
pod/ramen-dr-cluster-operator-556d9d484f-zw5f8   2/2     Running   458 (19h ago)   5d21h

oc describe csv odr-cluster-operator.v4.14.0-rhodf -n openshift-dr-system
Name:         odr-cluster-operator.v4.14.0-rhodf
Namespace:    openshift-dr-system
Labels:       full_version=4.14.0-161
              operatorframework.io/arch.amd64=supported
              operatorframework.io/arch.ppc64le=supported
              operatorframework.io/arch.s390x=supported
              operators.coreos.com/odr-cluster-operator.openshift-dr-system=
Annotations:  alm-examples:
                [{"apiVersion":"ramendr.openshift.io/v1alpha1","kind":"VolumeReplicationGroup","metadata":{"name":"volumereplicationgroup-sample"},"spec":{"async":{"schedulingInterval":"10m"},"kubeObjectProtection":{"captureInterval":"1m","captureOrder":[{"includedResources":["ConfigMap","Secret"],"name":"config"},{"includedResources":["sample1.cpd.ibm.com","sample2.cpd.ibm.com","sample3.cpd.ibm.com"],"name":"cpd"},{"includedResources":["Deployment"],"name":"deployments"},{"excludedResources":[""],"includeClusterResources":true,"name":"everything"}],"recoverOrder":[{"backupName":"config","includeClusterResources":true,"includedResources":["ConfigMap","Secret"]},{"backupName":"cpd","includedResources":["sample1.cpd.ibm.com","sample2.cpd.ibm.com","sample3.cpd.ibm.com"]},{"backupName":"deployments","includedResources":["Deployment"]},{"backupName":"everything","excludedResources":["ConfigMap","Secret","Deployment","sample1.cpd.ibm.com","sample2.cpd.ibm.com","sample3.cpd.ibm.com"]}]},"pvcSelector":{"matchLabels":{"any-pvc-label":"value"}},"replicationState":"primary","s3Profiles":["s3-profile-of-east","s3-profile-of-west"]}}]
              capabilities: Basic Install
              olm.operatorGroup: ramen-operator-group
              olm.operatorNamespace: openshift-dr-system
              olm.skipRange: >=4.2.0 <4.14.0-rhodf
              olm.targetNamespaces:
              operatorframework.io/properties:
                {"properties":[{"type":"olm.gvk","value":{"group":"ramendr.openshift.io","kind":"MaintenanceMode","version":"v1alpha1"}},{"type":"olm.gvk"...
              operatorframework.io/suggested-namespace: openshift-dr-system
              operators.openshift.io/infrastructure-features: ["disconnected"]
              operators.operatorframework.io/builder: operator-sdk-v1.24.0
              operators.operatorframework.io/project_layout: go.kubebuilder.io/v3
API Version:  operators.coreos.com/v1alpha1
Kind:         ClusterServiceVersion
Metadata:
  Creation Timestamp:  2023-11-30T13:19:23Z
  Generation:          1
  Managed Fields:      <field-manager bookkeeping trimmed; managers: catalog (2023-11-30T13:19:23Z), Go-http-client (2023-11-30T13:19:25Z), olm (status subresource 2023-12-05T15:09:50Z, metadata 2023-12-06T16:06:19Z)>
  Resource Version:    7418543
  UID:                 9a3f1d84-3a61-4a5e-9a28-2d2d761022eb
Spec:
  Apiservicedefinitions:
  Cleanup:
    Enabled:  false
  Customresourcedefinitions:
    Owned:
      Kind:          MaintenanceMode
      Name:          maintenancemodes.ramendr.openshift.io
      Version:       v1alpha1
      Kind:          ProtectedVolumeReplicationGroupList
      Name:          protectedvolumereplicationgrouplists.ramendr.openshift.io
      Version:       v1alpha1
      Description:   VolumeReplicationGroup is the Schema for the volumereplicationgroups API
      Display Name:  Volume Replication Group
      Kind:          VolumeReplicationGroup
      Name:          volumereplicationgroups.ramendr.openshift.io
      Version:       v1alpha1
  Description:  OpenShift DR Cluster is a disaster-recovery orchestrator for stateful applications, that operates from an Advanced Cluster Management (ACM) managed cluster and is controlled by Openshift DR Hub operator to orchestrate the life-cycle of an application, and its state on the managed cluster.
  Display Name:  Openshift DR Cluster Operator
  Icon:
    base64data:  <base64-encoded Red Hat SVG logo omitted>
    Mediatype:   image/svg+xml
  Install:
    Spec:
      Cluster Permissions:
        Rules:
          API Groups:
          Resources:   configmaps
          Verbs:       list watch
          API Groups:
          Resources:   events
          Verbs:       create get patch update
          API Groups:
          Resources:   pods
          Verbs:       get list watch
          API Groups:
          Resources:   persistentvolumeclaims
          Verbs:       create delete get list patch update watch
          API Groups:
          Resources:   persistentvolumes
          Verbs:       create get list patch update watch
          API Groups:  ramendr.openshift.io
          Resources:   protectedvolumereplicationgrouplists
          Verbs:       create delete get list patch update watch
          API Groups:  ramendr.openshift.io
          Resources:   protectedvolumereplicationgrouplists/finalizers
          Verbs:       update
          API Groups:  ramendr.openshift.io
          Resources:   protectedvolumereplicationgrouplists/status
          Verbs:       get patch update
          API Groups:  ramendr.openshift.io
          Resources:   volumereplicationgroups
          Verbs:       create delete get list patch update watch
          API Groups:  ramendr.openshift.io
          Resources:   volumereplicationgroups/finalizers
          Verbs:       update
          API Groups:  ramendr.openshift.io
          Resources:   volumereplicationgroups/status
          Verbs:       get patch update
          API Groups:  replication.storage.openshift.io
          Resources:   volumereplications
          Verbs:       create delete get list patch update watch
          API Groups:  replication.storage.openshift.io
          Resources:   volumereplicationclasses
          Verbs:       get list watch
          API Groups:  storage.k8s.io
          Resources:   storageclasses
          Verbs:       create get list update watch
          API Groups:  storage.k8s.io
          Resources:   volumeattachments
          Verbs:       get list watch
          API Groups:  multicluster.x-k8s.io
          Resources:   serviceexports
          Verbs:       create delete get list patch update watch
          API Groups:  velero.io
          Resources:   backups
          Verbs:       create delete deletecollection get list patch update watch
          API Groups:  velero.io
          Resources:   backups/status
          Verbs:       get
          API Groups:  velero.io
          Resources:   backupstoragelocations
          Verbs:       create delete deletecollection get patch update
          API Groups:  velero.io
          Resources:   restores
          Verbs:       create delete deletecollection get list patch update watch
          API Groups:  velero.io
          Resources:   restores/status
          Verbs:       get
          API Groups:  volsync.backube
          Resources:   replicationdestinations
          Verbs:       create delete get list patch update watch
          API Groups:  volsync.backube
          Resources:   replicationsources
          Verbs:       create delete get list patch update watch
          API Groups:
          Resources:   secrets
          Verbs:       create delete get list patch update watch
          API Groups:  snapshot.storage.k8s.io
          Resources:   volumesnapshotclasses
          Verbs:       get list watch
          API Groups:  snapshot.storage.k8s.io
          Resources:   volumesnapshots
          Verbs:       delete get list update watch
          API Groups:  ramendr.openshift.io
          Resources:   recipes
          Verbs:       get list watch
          API Groups:  authentication.k8s.io
          Resources:   tokenreviews
          Verbs:       create
          API Groups:  authorization.k8s.io
          Resources:   subjectaccessreviews
          Verbs:       create
        Service Account Name:  ramen-dr-cluster-operator
      Deployments:
        Label:
          App:              ramen-dr-cluster
          Control - Plane:  controller-manager
        Name:               ramen-dr-cluster-operator
        Spec:
          Replicas:  1
          Selector:
            Match Labels:
              App:              ramen-dr-cluster
              Control - Plane:  controller-manager
          Strategy:
          Template:
            Metadata:
              Annotations:
                kubectl.kubernetes.io/default-container:  manager
              Creation Timestamp:  <nil>
              Labels:
                App:              ramen-dr-cluster
                Control - Plane:  controller-manager
            Spec:
              Containers:
                Args:
                  --config=/config/ramen_manager_config.yaml
                Command:
                  /manager
                Env:
                  Name:  POD_NAMESPACE
                  Value From:
                    Field Ref:
                      Field Path:  metadata.namespace
                Image:              registry.redhat.io/odf4/odr-rhel9-operator@sha256:da0137ce75b86ee906d14253f77451f93b3780651ae26344d0fbdb20ffde8759
                Image Pull Policy:  IfNotPresent
                Liveness Probe:
                  Http Get:
                    Path:                 /healthz
                    Port:                 8081
                  Initial Delay Seconds:  15
                  Period Seconds:         20
                Name:  manager
                Readiness Probe:
                  Http Get:
                    Path:                 /readyz
                    Port:                 8081
                  Initial Delay Seconds:  5
                  Period Seconds:         10
                Resources:
                  Limits:
                    Cpu:     100m
                    Memory:  300Mi
                  Requests:
                    Cpu:     100m
                    Memory:  200Mi
                Security Context:
                  Allow Privilege Escalation:  false
                Volume Mounts:
                  Mount Path:  /etc/pki/ca-trust/extracted/pem
                  Name:        ramen-manager-trustedca-vol
                  Read Only:   true
                  Mount Path:  /config
                  Name:        ramen-manager-config-vol
                  Read Only:   true
                Args:
                  --secure-listen-address=0.0.0.0:8443
                  --upstream=http://127.0.0.1:9289/
                  --logtostderr=true
                  --v=10
                Image:  registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:1dddb0988d1612c996707d43eb839bc49fc7e7554afaf085436eeddb37a12438
                Name:   kube-rbac-proxy
                Ports:
                  Container Port:  8443
                  Name:            https
                  Protocol:        TCP
                Resources:
                  Limits:
                    Cpu:     500m
                    Memory:  128Mi
                  Requests:
                    Cpu:     5m
                    Memory:  64Mi
                Security Context:
                  Run As Non Root:  true
              Service Account Name:              ramen-dr-cluster-operator
              Termination Grace Period Seconds:  10
              Volumes:
                Config Map:
                  Items:
                    Key:   ca-bundle.crt
                    Path:  tls-ca-bundle.pem
                  Name:    openshift-trusted-cabundle
                Name:      ramen-manager-trustedca-vol
                Config Map:
                  Name:  ramen-dr-cluster-operator-config
                Name:    ramen-manager-config-vol
      Permissions:
        Rules:
          API Groups:
          Resources:   configmaps
          Verbs:       get list watch create update patch delete
          API Groups:  coordination.k8s.io
          Resources:   leases
          Verbs:       get list watch create update patch delete
          API Groups:
          Resources:   events
          Verbs:       create patch
          API Groups:
          Resources:   secrets
          Verbs:       get
        Service Account Name:  ramen-dr-cluster-operator
    Strategy:  deployment
  Install Modes:
    Supported:  false
    Type:       OwnNamespace
    Supported:  false
    Type:       SingleNamespace
    Supported:  false
    Type:       MultiNamespace
    Supported:  true
    Type:       AllNamespaces
  Keywords:
    Storage
    Integration & Delivery
    OpenShift Optional
  Links:
    Name:  Source Code
    URL:   https://github.com/red-hat-storage/ramen
  Maintainers:
    Email:  ocs-support
    Name:   Red Hat Support
  Maturity:  alpha
  Provider:
    Name:  Red Hat, Inc.
  Related Images:
    Image:  registry.redhat.io/odf4/odr-rhel9-operator@sha256:da0137ce75b86ee906d14253f77451f93b3780651ae26344d0fbdb20ffde8759
    Name:   odr-operator
    Image:  registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:1dddb0988d1612c996707d43eb839bc49fc7e7554afaf085436eeddb37a12438
    Name:   rbac-proxy
  Version:  4.14.0-rhodf
Status:
  Cleanup:
  Conditions:
    Last Transition Time:  2023-12-05T14:58:10Z
    Last Update Time:      2023-12-05T14:58:10Z
    Message:               waiting for install components to report healthy
    Phase:                 Installing
    Reason:                InstallSucceeded
    Last Transition Time:  2023-12-05T14:58:10Z
    Last Update Time:      2023-12-05T14:58:11Z
    Message:               installing: waiting for deployment ramen-dr-cluster-operator to become ready: deployment "ramen-dr-cluster-operator" not available: Deployment does not have minimum availability.
    Phase:                 Installing
    Reason:                InstallWaiting
    Last Transition Time:  2023-12-05T14:58:21Z
    Last Update Time:      2023-12-05T14:58:21Z
    Message:               install strategy completed with no errors
    Phase:                 Succeeded
    Reason:                InstallSucceeded
    Last Transition Time:  2023-12-05T15:00:38Z
    Last Update Time:      2023-12-05T15:00:38Z
    Message:               installing: waiting for deployment ramen-dr-cluster-operator to become ready: deployment "ramen-dr-cluster-operator" not available: Deployment does not have minimum availability.
    Phase:                 Failed
    Reason:                ComponentUnhealthy
    Last Transition Time:  2023-12-05T15:00:39Z
    Last Update Time:      2023-12-05T15:00:39Z
    Message:               installing: waiting for deployment ramen-dr-cluster-operator to become ready: deployment "ramen-dr-cluster-operator" not available: Deployment does not have minimum availability.
    Phase:                 Pending
    Reason:                NeedsReinstall
    Last Transition Time:  2023-12-05T15:00:40Z
    Last Update Time:      2023-12-05T15:00:40Z
    Message:               all requirements found, attempting install
    Phase:                 InstallReady
    Reason:                AllRequirementsMet
    Last Transition Time:  2023-12-05T15:00:42Z
    Last Update Time:      2023-12-05T15:00:42Z
    Message:               waiting for install components to report healthy
    Phase:                 Installing
    Reason:                InstallSucceeded
    Last Transition Time:  2023-12-05T15:00:42Z
    Last Update Time:      2023-12-05T15:00:44Z
    Message:               installing: waiting for deployment ramen-dr-cluster-operator to become ready: deployment "ramen-dr-cluster-operator" not available: Deployment does not have minimum availability.
    Phase:                 Installing
    Reason:                InstallWaiting
    Last Transition Time:  2023-12-05T15:05:41Z
    Last Update Time:      2023-12-05T15:05:41Z
    Message:               install timeout
    Phase:                 Failed
    Reason:                InstallCheckFailed
    Last Transition Time:  2023-12-05T15:05:42Z
    Last Update Time:      2023-12-05T15:05:42Z
    Message:               installing: waiting for deployment ramen-dr-cluster-operator to become ready: deployment "ramen-dr-cluster-operator" not available: Deployment does not have minimum availability.
    Phase:                 Pending
    Reason:                NeedsReinstall
    Last Transition Time:  2023-12-05T15:05:43Z
    Last Update Time:      2023-12-05T15:05:43Z
    Message:               all requirements found, attempting install
    Phase:                 InstallReady
    Reason:                AllRequirementsMet
    Last Transition Time:  2023-12-05T15:05:43Z
    Last Update Time:      2023-12-05T15:05:43Z
    Message:               waiting for install components to report healthy
    Phase:                 Installing
    Reason:                InstallSucceeded
    Last Transition Time:  2023-12-05T15:05:43Z
    Last Update Time:      2023-12-05T15:05:44Z
    Message:               installing: waiting for deployment ramen-dr-cluster-operator to become ready: deployment "ramen-dr-cluster-operator" not available: Deployment does not have minimum availability.
    Phase:                 Installing
    Reason:                InstallWaiting
    Last Transition Time:  2023-12-05T15:05:51Z
    Last Update Time:      2023-12-05T15:05:51Z
    Message:               install strategy completed with no errors
    Phase:                 Succeeded
    Reason:                InstallSucceeded
    Last Transition Time:  2023-12-05T15:08:06Z
    Last Update Time:      2023-12-05T15:08:06Z
    Message:               installing: waiting for deployment ramen-dr-cluster-operator to become ready: deployment "ramen-dr-cluster-operator" not available: Deployment does not have minimum availability.
    Phase:                 Failed
    Reason:                ComponentUnhealthy
    Last Transition Time:  2023-12-05T15:08:07Z
    Last Update Time:      2023-12-05T15:08:07Z
    Message:               installing: waiting for deployment ramen-dr-cluster-operator to become ready: deployment "ramen-dr-cluster-operator" not available: Deployment does not have minimum availability.
    Phase:                 Pending
    Reason:                NeedsReinstall
    Last Transition Time:  2023-12-05T15:08:07Z
    Last Update Time:      2023-12-05T15:08:07Z
    Message:               all requirements found, attempting install
    Phase:                 InstallReady
    Reason:                AllRequirementsMet
    Last Transition Time:  2023-12-05T15:08:07Z
    Last Update Time:      2023-12-05T15:08:07Z
    Message:               waiting for install components to report healthy
    Phase:                 Installing
    Reason:                InstallSucceeded
    Last Transition Time:  2023-12-05T15:08:07Z
    Last Update Time:      2023-12-05T15:08:08Z
    Message:               installing: waiting for deployment ramen-dr-cluster-operator to become ready: deployment "ramen-dr-cluster-operator" not available: Deployment does not have minimum availability.
    Phase:                 Installing
    Reason:                InstallWaiting
    Last Transition Time:  2023-12-05T15:09:50Z
    Last Update Time:      2023-12-05T15:09:50Z
    Message:               csv created in namespace with multiple operatorgroups, can't pick one automatically
    Phase:                 Failed
    Reason:                TooManyOperatorGroups
    Last Transition Time:  2023-12-05T15:09:50Z
    Last Update Time:      2023-12-05T15:09:50Z
    Message:               csv created in namespace with multiple operatorgroups, can't pick one automatically
    Phase:                 Failed
    Reason:                TooManyOperatorGroups
  Requirement Status:
    Group:    apiextensions.k8s.io
    Kind:     CustomResourceDefinition
    Message:  CRD is present and Established condition is true
    Name:     maintenancemodes.ramendr.openshift.io
    Status:   Present
    Uuid:     44aeb401-8308-4166-92c7-f1ef574b4725
    Version:  v1
    Group:    apiextensions.k8s.io
    Kind:     CustomResourceDefinition
    Message:  CRD is present and Established condition is true
    Name:     protectedvolumereplicationgrouplists.ramendr.openshift.io
    Status:   Present
    Uuid:     814f9ae2-9bac-4973-a5e1-989c60848ab6
    Version:  v1
    Group:    apiextensions.k8s.io
    Kind:     CustomResourceDefinition
    Message:  CRD is present and Established condition is true
    Name:     volumereplicationgroups.ramendr.openshift.io
    Status:   Present
    Uuid:     88dfbbeb-f5d5-4bbe-af64-25941dfc38db
    Version:  v1
    Dependents:
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  namespaced rule:{"verbs":["get","list","watch","create","update","patch","delete"],"apiGroups":[""],"resources":["configmaps"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  namespaced rule:{"verbs":["get","list","watch","create","update","patch","delete"],"apiGroups":["coordination.k8s.io"],"resources":["leases"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  namespaced rule:{"verbs":["create","patch"],"apiGroups":[""],"resources":["events"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  namespaced rule:{"verbs":["get"],"apiGroups":[""],"resources":["secrets"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["list","watch"],"apiGroups":[""],"resources":["configmaps"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","get","patch","update"],"apiGroups":[""],"resources":["events"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["get","list","watch"],"apiGroups":[""],"resources":["pods"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":[""],"resources":["persistentvolumeclaims"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","get","list","patch","update","watch"],"apiGroups":[""],"resources":["persistentvolumes"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["ramendr.openshift.io"],"resources":["protectedvolumereplicationgrouplists"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["update"],"apiGroups":["ramendr.openshift.io"],"resources":["protectedvolumereplicationgrouplists/finalizers"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["get","patch","update"],"apiGroups":["ramendr.openshift.io"],"resources":["protectedvolumereplicationgrouplists/status"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["ramendr.openshift.io"],"resources":["volumereplicationgroups"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["update"],"apiGroups":["ramendr.openshift.io"],"resources":["volumereplicationgroups/finalizers"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["get","patch","update"],"apiGroups":["ramendr.openshift.io"],"resources":["volumereplicationgroups/status"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["replication.storage.openshift.io"],"resources":["volumereplications"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["get","list","watch"],"apiGroups":["replication.storage.openshift.io"],"resources":["volumereplicationclasses"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","get","list","update","watch"],"apiGroups":["storage.k8s.io"],"resources":["storageclasses"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["get","list","watch"],"apiGroups":["storage.k8s.io"],"resources":["volumeattachments"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["multicluster.x-k8s.io"],"resources":["serviceexports"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"apiGroups":["velero.io"],"resources":["backups"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["get"],"apiGroups":["velero.io"],"resources":["backups/status"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","deletecollection","get","patch","update"],"apiGroups":["velero.io"],"resources":["backupstoragelocations"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"apiGroups":["velero.io"],"resources":["restores"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["get"],"apiGroups":["velero.io"],"resources":["restores/status"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["volsync.backube"],"resources":["replicationdestinations"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["volsync.backube"],"resources":["replicationsources"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":[""],"resources":["secrets"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["get","list","watch"],"apiGroups":["snapshot.storage.k8s.io"],"resources":["volumesnapshotclasses"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["delete","get","list","update","watch"],"apiGroups":["snapshot.storage.k8s.io"],"resources":["volumesnapshots"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["get","list","watch"],"apiGroups":["ramendr.openshift.io"],"resources":["recipes"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create"],"apiGroups":["authentication.k8s.io"],"resources":["tokenreviews"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create"],"apiGroups":["authorization.k8s.io"],"resources":["subjectaccessreviews"]}
      Status:   Satisfied
      Version:  v1
    Group:
    Kind:     ServiceAccount
    Message:
    Name:     ramen-dr-cluster-operator
    Status:   Present
    Version:  v1
Events:  <none>

See the attachment for a UI view of the issue.

ODF must-gather logs for all three nodes are available on the Red Hat Google
Drive:
https://drive.google.com/drive/folders/1CaxZaA-3JeLld8wnExJpIMptEiQjpzK2?usp=sharing

Version of all relevant components (if applicable):
ODF 4.14, OCP 4.14

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Yes.

Is there any workaround available to the best of your knowledge?
Possibly uninstalling and reinstalling the operator; see also the
OperatorGroup check sketched under "Additional info" below.

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
Is this issue reproducible?

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
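The final failure reason above is TooManyOperatorGroups ("csv created in
namespace with multiple operatorgroups, can't pick one automatically"), which
OLM reports when the CSV's namespace contains more than one OperatorGroup.
The cluster is no longer available, so I could not confirm this directly, but
a sketch of how to check (and, untested, how to recover) would be:

# List OperatorGroups in the operator namespace; OLM expects exactly one.
# Per the CSV's olm.operatorGroup annotation it should be ramen-operator-group.
oc get operatorgroups -n openshift-dr-system

# Inspect any OperatorGroups found, including their target namespaces.
oc get operatorgroups -n openshift-dr-system -o yaml

# Untested recovery idea: delete the unexpected OperatorGroup (name unknown
# here) so that only ramen-operator-group remains, then let OLM retry:
# oc delete operatorgroup <extra-operatorgroup-name> -n openshift-dr-system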
Due to hardware constraints I could not keep this cluster running, but I
captured ODF must-gather logs as described in the ODF documentation. Please
let me know if we need to work on a reproduction or proceed some other way.
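For reference, the logs were collected with a must-gather of this form (the
image tag is quoted from memory of the ODF 4.14 docs, so treat it as an
assumption):

oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.14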
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.15.0 security, enhancement, & bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2024:1383
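After updating to a build that contains the fix, the operator state can be
re-verified with, for example:

# The CSV should reach Succeeded, and a single OperatorGroup should remain.
oc get csv -n openshift-dr-system
oc get operatorgroups -n openshift-dr-system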