Bug 1748957
| Summary: | CRs are not being migrated | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Sergio <sregidor> |
| Component: | Migration Tooling | Assignee: | Scott Seago <sseago> |
| Status: | CLOSED ERRATA | QA Contact: | Sergio <sregidor> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 4.2.0 | CC: | chezhang, dymurray, jmatthew, rpattath, xjiang |
| Target Milestone: | --- | | |
| Target Release: | 4.4.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-05-28 11:09:55 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description

Sergio 2019-09-04 14:29:52 UTC
This is an upstream issue. I have already submitted a PR to include the relevant CRDs in the backup/restore, which has been merged. There is a related upstream race condition: if the newly loaded CRD isn't yet ready, CR restore will still fail. There is an in-progress upstream PR for that. Once both are merged, we should be able to include the fix, either when we update to the next Velero release or, if necessary, by cherry-picking the fixes into our internal Velero build.

The upstream fix that's already merged: https://github.com/vmware-tanzu/velero/pull/1831

The upstream fix that's still in progress: https://github.com/vmware-tanzu/velero/pull/1937

It looks like the upstream in-progress PR is being actively worked on again. Once it's merged (and our Velero is upgraded to 1.2), I can cherry-pick the upstream fix into our build. The already-merged fix is in Velero 1.2.

Oops. I updated the wrong PR. Disregard the above comment.

The upstream commits from the (open) upstream PR have been cherry-picked into https://github.com/fusor/velero/pull/48 -- once that's tested and reviewed, it can be merged. Once we upgrade to Velero 1.3, we will no longer need to carry this cherry-pick.

We ran into some issues with further testing and believe more work is required to investigate a potential upstream problem. Moving this to the next release, as we missed the window to get this into CAM 1.1.

Velero 1.3.1 should include the remaining part of the fix.

Verified in CAM 1.2 stage.

Note: We detected that after creating the CRD, Velero needs a short amount of time to become aware of the CRD's existence before it can migrate that CRD's resources.
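The note above describes a timing gap: a freshly created CRD is not immediately usable, so restoring its CRs too early fails. A minimal sketch of the underlying wait-until-ready pattern, in generic Python (the `crd_established` predicate is hypothetical; Velero's actual implementation checks the CRD's readiness via the Kubernetes API):

```python
import time


def wait_until_ready(is_ready, timeout=30.0, interval=0.5):
    """Poll is_ready() until it returns True or the timeout expires.

    Returns True if the resource became ready within the timeout,
    False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_ready():
            return True
        time.sleep(interval)
    return False


# Hypothetical usage: in a real restore, crd_established() would query the
# API server for the CRD's "Established" condition. Here we simulate a CRD
# that becomes ready after one second.
ready_at = time.monotonic() + 1.0
crd_established = lambda: time.monotonic() >= ready_at
print(wait_until_ready(crd_established, timeout=5.0, interval=0.1))
```

Only after the predicate reports ready would CR restore proceed; bounding the wait with a timeout keeps a stuck CRD from blocking the whole restore.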
In source cluster (4.2):

```
$ oc get crds | grep deploycustom
deploycustoms.samplecontroller.k8s.io   2020-05-07T09:49:12Z
$ oc get deploycustom
NAME                 AGE
example-deployment   7m44s
```

Result in target cluster (4.3):

```
$ oc get crds | grep deploycustom
deploycustoms.samplecontroller.k8s.io   2020-05-07T14:04:55Z
$ oc get deploycustom
NAME                 AGE
example-deployment   96s
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:2326