Bug 2309620 - [RDR] backupFailedValidation error message seen in ramen-dr-cluster-operator pod logs
Keywords:
Status: CLOSED DUPLICATE of bug 2277941
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-dr
Version: 4.17
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ---
Assignee: Karolin Seeger
QA Contact: krishnaram Karthick
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2024-09-04 05:38 UTC by Pratik Surve
Modified: 2024-09-04 12:53 UTC
CC: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-09-04 12:53:13 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker OCSBZM-8941 0 None None None 2024-09-04 05:41:47 UTC

Description Pratik Surve 2024-09-04 05:38:12 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

[RDR] backupFailedValidation error message seen in ramen-dr-cluster-operator pod logs  

Version of all relevant components (if applicable):

OCP version:- 4.17.0-0.nightly-2024-09-02-044025
ODF version:- 4.17.0-90
CEPH version:- ceph version 19.1.0-42.el9cp (03ae7f7ffec5e7796d2808064c4766b35c4b5ffb) squid (rc)
ACM version:- 2.11.2
SUBMARINER version:- v0.18.0
VOLSYNC version:- volsync-product.v0.10.0
OADP version:- 1.4.0
VOLSYNC method:- destinationCopyMethod: Direct

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy an RDR cluster with 4.16
2. Upgrade to 4.17
3. Check the ramen operator pod logs
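For step 3, the error can be spotted by filtering the operator log stream, e.g. `oc logs -n <ramen-namespace> deploy/ramen-dr-cluster-operator | grep backupFailedValidation` (the deployment name and namespace depend on the install and are assumptions here). A minimal sketch of the filter, demonstrated on a sample line from this report:

```shell
# Filter a log stream for the validation failure marker; in a live cluster,
# pipe `oc logs` output into the same grep instead of the heredoc sample.
grep -o 'backupFailedValidation' <<'EOF'
2024-09-04T05:32:37.236Z ERROR controllers.VolumeReplicationGroup.vrginstance controller/vrg_kubeobjects.go:626 Kube objects group recover error {"error": "backupFailedValidation"}
EOF
```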


Actual results:
2024-09-04T05:32:37.235Z	INFO	controllers.VolumeReplicationGroup.vrginstance	velero/requests.go:646	Backup	{"VolumeReplicationGroup": {"name":"app-imp-1","namespace":"openshift-dr-ops"}, "rid": "dfccfd42-2c58-4add-8673-a889f68a09d3", "State": "primary", "phase": "FailedValidation", "warnings": 0, "errors": 0, "failure": "", "validation errors": ["an existing backup storage location wasn't specified at backup creation time and the default 'openshift-dr-ops--app-imp-1--0----s3profile-prsurve-c1-ocs-storagecluster' wasn't found. Please address this issue (see `velero backup-location -h` for options) and create a new backup. Error: BackupStorageLocation.velero.io \"openshift-dr-ops--app-imp-1--0----s3profile-prsurve-c1-ocs-storagecluster\" not found"]}
2024-09-04T05:32:37.236Z	ERROR	controllers.VolumeReplicationGroup.vrginstance	controller/vrg_kubeobjects.go:626	Kube objects group recover error	{"VolumeReplicationGroup": {"name":"app-imp-1","namespace":"openshift-dr-ops"}, "rid": "dfccfd42-2c58-4add-8673-a889f68a09d3", "State": "primary", "number": 0, "profile": "s3profile-prsurve-c1-ocs-storagecluster", "group": 0, "name": "", "error": "backupFailedValidation"}
github.com/ramendr/ramen/internal/controller.(*VRGInstance).kubeObjectsRecoveryStartOrResume
	/remote-source/app/internal/controller/vrg_kubeobjects.go:626
github.com/ramendr/ramen/internal/controller.(*VRGInstance).kubeObjectsRecover
	/remote-source/app/internal/controller/vrg_kubeobjects.go:501
github.com/ramendr/ramen/internal/controller.(*VRGInstance).restorePVsAndPVCsFromS3
	/remote-source/app/internal/controller/vrg_volrep.go:1940
github.com/ramendr/ramen/internal/controller.(*VRGInstance).restorePVsAndPVCsForVolRep
	/remote-source/app/internal/controller/vrg_volrep.go:1878
github.com/ramendr/ramen/internal/controller.(*VRGInstance).clusterDataRestore
	/remote-source/app/internal/controller/volumereplicationgroup_controller.go:618
github.com/ramendr/ramen/internal/controller.(*VRGInstance).processAsPrimary
	/remote-source/app/internal/controller/volumereplicationgroup_controller.go:895
github.com/ramendr/ramen/internal/controller.(*VRGInstance).processVRG
	/remote-source/app/internal/controller/volumereplicationgroup_controller.go:566
github.com/ramendr/ramen/internal/controller.(*VolumeReplicationGroupReconciler).Reconcile
	/remote-source/app/internal/controller/volumereplicationgroup_controller.go:453
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile
	/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:119
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:266
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
	/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:227



Expected results:

There should not be any errors in the logs.

Additional info:

Snippet from the Backup CR:

Validation Errors:
    an existing backup storage location wasn't specified at backup creation time and the default 'openshift-dr-ops--app-imp-1--0----s3profile-prsurve-c1-ocs-storagecluster' wasn't found. Please address this issue (see `velero backup-location -h` for options) and create a new backup. Error: BackupStorageLocation.velero.io "openshift-dr-ops--app-imp-1--0----s3profile-prsurve-c1-ocs-storagecluster" not found
  Version:  1

$ oc get bsl -A
NAMESPACE       NAME                                                                          PHASE         LAST VALIDATED   AGE   DEFAULT
openshift-adp   openshift-dr-ops--app-imp-1--0----s3profile-prsurve-c1-ocs-storagecluster     Unavailable   70s              13h
openshift-adp   openshift-dr-ops--app-imp-1--0----s3profile-prsurve-vm-d-ocs-storagecluster   Unavailable   12s              17h
openshift-adp   openshift-dr-ops--app-imp-1--1----s3profile-prsurve-c1-ocs-storagecluster     Unavailable   40s              17h
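All three BackupStorageLocations above report PHASE `Unavailable`. A small sketch for listing only the non-Available BSL names, run here against the captured `oc get bsl -A` output (in a live cluster, pipe `oc get bsl -A` into the same awk filter; `oc describe bsl <name> -n openshift-adp` would then show the validation failure reason):

```shell
# Print the NAME column ($2) for every row whose PHASE column ($3) is not
# "Available", skipping the header row (NR > 1).
awk 'NR > 1 && $3 != "Available" { print $2 }' <<'EOF'
NAMESPACE       NAME                                                                          PHASE         LAST VALIDATED   AGE   DEFAULT
openshift-adp   openshift-dr-ops--app-imp-1--0----s3profile-prsurve-c1-ocs-storagecluster     Unavailable   70s              13h
openshift-adp   openshift-dr-ops--app-imp-1--0----s3profile-prsurve-vm-d-ocs-storagecluster   Unavailable   12s              17h
openshift-adp   openshift-dr-ops--app-imp-1--1----s3profile-prsurve-c1-ocs-storagecluster     Unavailable   40s              17h
EOF
```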

