Bug 2031033 - VM migration from VMware fails on missing v2v-vmware ConfigMap in OCP-4.10/CNV-4.10
Summary: VM migration from VMware fails on missing v2v-vmware ConfigMap in OCP-4.10/CNV-4.10
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Storage
Version: 4.10.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.10.0
Assignee: Matthew Arnold
QA Contact: Ilanit Stein
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-12-10 11:13 UTC by Ilanit Stein
Modified: 2022-03-16 15:57 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-03-16 15:57:01 UTC
Target Upstream Version:
Embargoed:




Links
Github kubevirt/containerized-data-importer pull 2102 (Merged): Allow optional per-DataVolume VDDK image. Last updated 2022-02-06 07:08:40 UTC
Github kubevirt/containerized-data-importer pull 2115 (Merged): [release-v1.42] Allow optional per-DataVolume VDDK image. Last updated 2022-02-06 07:08:44 UTC
Red Hat Product Errata RHSA-2022:0947. Last updated 2022-03-16 15:57:17 UTC

Description Ilanit Stein 2021-12-10 11:13:49 UTC
Description of problem:

For VMware, a RHEL7 VM migration remains stuck in the initialization phase.

cdi-deployment pod log error:
"error":"No v2v-vmware ConfigMap present in namespace openshift-cnv"

This happens even though the VDDK init image is configured via:
$ oc edit hco -n openshift-cnv kubevirt-hyperconverged
spec:
  vddkInitImage: <server>:5000/vddk-images/vddk:v702
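
For anyone reproducing this, the mismatch can be confirmed with standard oc commands. This is a minimal sketch reusing names from this report; the "vddk-init-image" ConfigMap key in the last command is an assumption (suggested by the getVddkImageName frame in the stack trace below), so verify it against the CDI version in use:

# Show the VDDK init image configured on the HyperConverged CR
$ oc get hco kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.vddkInitImage}{"\n"}'

# The CDI import controller still looks for this ConfigMap, which is absent
$ oc get configmap v2v-vmware -n openshift-cnv

# Hypothetical stop-gap only: recreate the ConfigMap CDI expects (assumed key name)
$ oc create configmap v2v-vmware -n openshift-cnv \
    --from-literal=vddk-init-image=<server>:5000/vddk-images/vddk:v702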

$ oc logs cdi-deployment-76bb8c4759-gtxfj -n openshift-cnv | grep error
W1210 09:39:32.966210       1 reflector.go:436] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:137: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 249; INTERNAL_ERROR") has prevented the request from succeeding
{"level":"error","ts":1639130705.2753606,"logger":"controller-runtime.manager.controller.import-controller","msg":"Reconcile
r error","name":"plan1-vm-689-kvpv4","namespace":"istein","error":"No v2v-vmware ConfigMap present in namespace openshift-cn
v","errorVerbose":"No v2v-vmware ConfigMap present in namespace openshift-cnv\nkubevirt.io/containerized-data-importer/pkg/c
ontroller.(*ImportReconciler).getVddkImageName\n\t/remote-source/app/pkg/controller/import-controller.go:769\nkubevirt.io/co
ntainerized-data-importer/pkg/controller.(*ImportReconciler).createImporterPod\n\t/remote-source/app/pkg/controller/import-c
ontroller.go:493\nkubevirt.io/containerized-data-importer/pkg/controller.(*ImportReconciler).reconcilePvc\n\t/remote-source/
app/pkg/controller/import-controller.go:327\nkubevirt.io/containerized-data-importer/pkg/controller.(*ImportReconciler).Reco
ncile\n\t/remote-source/app/pkg/controller/import-controller.go:278\nsigs.k8s.io/controller-runtime/pkg/internal/controller.
(*Controller).reconcileHandler\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controlle
r.go:298\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/remote-source/app/ven
dor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:253\nsigs.k8s.io/controller-runtime/pkg/internal/co
ntroller.(*Controller).Start.func1.2\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/con
troller.go:216\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\t/remote-source/app/vendor/k8s.io/apimachin
ery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/remote-source/app/vendor/k8s.io/apim
achinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/remote-source/app/vendor/k8s.io/apima
chinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/remote-source/app/vendor/k8s.io/apimach
inery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\t/remote-source/app/vendor/k8s.i
o/apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\t/remote-source/app/vendor/k8
s.io/apimachinery/pkg/util/wait/wait.go:99\nruntime.goexit\n\t/usr/lib/golang/src/runtime/asm_amd64.s:1371","stacktrace":"gi
thub.com/go-logr/zapr.(*zapLogger).Error\n\t/remote-source/app/vendor/github.com/go-logr/zapr/zapr.go:132\nsigs.k8s.io/contr
oller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/remote-source/app/vendor/sigs.k8s.io/controller-run
time/pkg/internal/controller/controller.go:302\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).process
NextWorkItem\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:253\nsigs.k8s
.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2\n\t/remote-source/app/vendor/sigs.k8s.io/controll
er-runtime/pkg/internal/controller/controller.go:216\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\t/rem
ote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\
t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t
/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/r
emote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithCont
ext\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithC
ontext\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:99"}
{"level":"error","ts":1639130705.2776117,"logger":"controller.datavolume-controller","msg":"Unable to update datavolume","na
me":"plan1-vm-689-kvpv4","error":"Operation cannot be fulfilled on datavolumes.cdi.kubevirt.io \"plan1-vm-689-kvpv4\": the o
bject has been modified; please apply your changes to the latest version and try again","stacktrace":"github.com/go-logr/zap
r.(*zapLogger).Error\n\t/remote-source/app/vendor/github.com/go-logr/zapr/zapr.go:132\nkubevirt.io/containerized-data-import
er/pkg/controller.(*DatavolumeReconciler).emitEvent\n\t/remote-source/app/pkg/controller/datavolume-controller.go:2151\nkube
virt.io/containerized-data-importer/pkg/controller.(*DatavolumeReconciler).reconcileDataVolumeStatus\n\t/remote-source/app/p
kg/controller/datavolume-controller.go:2087\nkubevirt.io/containerized-data-importer/pkg/controller.(*DatavolumeReconciler).
Reconcile\n\t/remote-source/app/pkg/controller/datavolume-controller.go:500\nsigs.k8s.io/controller-runtime/pkg/internal/con
troller.(*Controller).reconcileHandler\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/c
ontroller.go:298\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/remote-source
/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:253\nsigs.k8s.io/controller-runtime/pkg/int
ernal/controller.(*Controller).Start.func1.2\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/contro
ller/controller.go:216\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\t/remote-source/app/vendor/k8s.io/a
pimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/remote-source/app/vendor/k8s
.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/remote-source/app/vendor/k8s.
io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/remote-source/app/vendor/k8s.io
/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\t/remote-source/app/vend
or/k8s.io/apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\t/remote-source/app/v
endor/k8s.io/apimachinery/pkg/util/wait/wait.go:99"}

Version-Release number of selected component (if applicable):
MTV-2.2.0-104
OCP-4.10.0
CNV-4.10.0

Comment 1 Fabien Dupont 2022-01-04 13:52:42 UTC
The issue happens in CDI, so moving the BZ to CNV/Storage. Assigning to Matthew.

Comment 2 Amos Mastbaum 2022-01-06 14:41:39 UTC
Same behaviour on OCP-4.10/MTV-2.3.

Comment 3 Ilanit Stein 2022-02-06 07:19:02 UTC
Fixed in MTV-2.3.0-23.
To deploy it, use:
   Index image v4.9:  iib:171521
   Index image v4.10: iib:171525

Sam Lucidi:

The upshot of this bug is that we can no longer rely on CNV to propagate the VDDK init image to CDI.
Starting with MTV 2.3.0-23, the VDDK init image needs to be specified on the Provider CR for vSphere migrations.
Below is an example of a vSphere Provider CR with vddkInitImage specified:


kind: Provider
apiVersion: forklift.konveyor.io/v1beta1
metadata:
  name: boston
  namespace: konveyor-forklift
spec:
  url: "https://<vcenter url>/sdk"
  type: vsphere
  settings:
    vddkInitImage: <vddk init image link>
  secret:
    namespace: konveyor-forklift
    name: boston
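
For reference, the linked CDI change (kubevirt/containerized-data-importer pull 2102) lets the importer take the VDDK image from an annotation on the DataVolume itself instead of the removed v2v-vmware ConfigMap, and forklift is expected to fill that in from the Provider's vddkInitImage. A minimal sketch of such a DataVolume, assuming the annotation key cdi.kubevirt.io/storage.pod.vddk.image and using hypothetical names, paths, and credentials:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: vmware-disk-example             # hypothetical name
  namespace: konveyor-forklift
  annotations:
    # Assumed annotation key: per-DataVolume VDDK init image read by CDI
    cdi.kubevirt.io/storage.pod.vddk.image: "<vddk init image link>"
spec:
  source:
    vddk:
      url: "https://<vcenter url>/sdk"
      uuid: "<source vm uuid>"
      backingFile: "[datastore] example-vm/example-vm.vmdk"   # example disk path
      thumbprint: "<vcenter tls thumbprint>"
      secretRef: vmware-credentials                           # hypothetical Secret
  storage:
    resources:
      requests:
        storage: 20Gi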

Comment 4 Ilanit Stein 2022-02-13 06:40:57 UTC
Verified on CNV-4.10/MTV-2.3.0-23.
VM migration from VMware was successful.

Comment 7 errata-xmlrpc 2022-03-16 15:57:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Virtualization 4.10.0 Images security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0947

