Description of problem:
When trying to import a VM with a 63-character-long name (VMware only!), the VM progress bar in the UI stops at 75% and the controller log shows:

2020-11-18T08:15:35.449440854Z {"level":"error","ts":1605687335.4492824,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"virtualmachineimport-controller","name":"test-import","namespace":"amos","error":"Job.batch \"vmimport.v2v.kubevirt.io45mv2\" is invalid: [spec.template.spec.volumes[0].name: Invalid value: \"v2v-cirros-for-tests-char63longssssssssssssssssssssssssssssssss-harddisk1\": must be no more than 63 characters, spec.template.spec.containers[0].volumeMounts[0].name: Not found: \"v2v-cirros-for-tests-char63longssssssssssssssssssssssssssssssss-harddisk1\"]","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/kubevirt/vm-import-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/kubevirt/vm-import-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:248\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/kubevirt/vm-import-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:222\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/src/github.com/kubevirt/vm-import-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:201\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/kubevirt/vm-import-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/kubevirt/vm-import-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/kubevirt/vm-import-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

Version-Release number of selected component (if applicable):

How reproducible:
100%

Steps to Reproduce:
1. Using the CLI or UI, import a VMware VM with a 63-character-long name.

Actual results:
The import stalls at 75%. The controller cannot create the conversion Job because the derived volume name (target VM name plus disk suffix) exceeds 63 characters.

Expected results:
The import completes successfully.

Additional info:
---
apiVersion: v1
kind: Secret
metadata:
  name: vmw-secret
type: Opaque
stringData:
  vmware: |-
    # API URL of the vCenter or ESXi host
    apiUrl: "https://10.1.41.37/sdk"
    # Username provided in the format of username@domain.
    username: administrator
    password: Heslo123!
    # The certificate thumbprint of the vCenter or ESXi host, in colon-separated hexadecimal octets.
    thumbprint: 31:14:EB:9E:F1:78:68:10:A5:78:D1:A7:DF:BB:54:B7:1B:91:9F:30
---
apiVersion: v2v.kubevirt.io/v1beta1
kind: ResourceMapping
metadata:
  name: resource-mapping
spec:
  vmware:
    networkMappings:
    - source:
        id: network-14
      target:
        name: pod
        namespace: ""
    storageMappings:
    - source:
        id: datastore-11
      target:
        name: nfs
    - source:
        id: datastore-12
      target:
        name: nfs
---
apiVersion: v2v.kubevirt.io/v1beta1
kind: VirtualMachineImport
metadata:
  name: test-import
spec:
  providerCredentialsSecret:
    name: vmw-secret
  resourceMapping:
    name: resource-mapping
  targetVmName: v2v-cirros-for-tests-char63longssssssssssssssssssssssssssssssss
  startVm: false
  source:
    vmware:
      vm:
        name: v2v-cirros-for-tests-char63longssssssssssssssssssssssssssssssss

apiVersion: v2v.kubevirt.io/v1beta1
kind: VirtualMachineImport
metadata:
  annotations:
    vmimport.v2v.kubevirt.io/progress: "75"
    vmimport.v2v.kubevirt.io/source-vm-initial-state: down
  creationTimestamp: "2020-11-18T08:14:06Z"
  finalizers:
  - vmimport.v2v.kubevirt.io/cancelled-import
  generation: 1
  managedFields:
  - apiVersion: v2v.kubevirt.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        .: {}
        f:providerCredentialsSecret:
          .: {}
          f:name: {}
        f:resourceMapping:
          .: {}
          f:name: {}
        f:source:
          .: {}
          f:vmware:
            .: {}
            f:vm:
              .: {}
              f:name: {}
        f:startVm: {}
        f:targetVmName: {}
    manager: oc
    operation: Update
    time: "2020-11-18T08:14:06Z"
  - apiVersion: v2v.kubevirt.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:vmimport.v2v.kubevirt.io/progress: {}
          f:vmimport.v2v.kubevirt.io/source-vm-initial-state: {}
        f:finalizers:
          .: {}
          v:"vmimport.v2v.kubevirt.io/cancelled-import": {}
      f:status:
        .: {}
        f:conditions: {}
        f:dataVolumes: {}
        f:targetVmName: {}
    manager: vm-import-controller
    operation: Update
    time: "2020-11-18T08:15:24Z"
  name: test-import
  namespace: amos
  resourceVersion: "2197250"
  selfLink: /apis/v2v.kubevirt.io/v1beta1/namespaces/amos/virtualmachineimports/test-import
  uid: 2ae927e4-32d7-48f3-8416-74f0412d1405
spec:
  providerCredentialsSecret:
    name: vmw-secret
  resourceMapping:
    name: resource-mapping
  source:
    vmware:
      vm:
        name: v2v-cirros-for-tests-char63longssssssssssssssssssssssssssssssss
  startVm: false
  targetVmName: v2v-cirros-for-tests-char63longssssssssssssssssssssssssssssssss
status:
  conditions:
  - lastHeartbeatTime: "2020-11-18T08:14:06Z"
    lastTransitionTime: "2020-11-18T08:14:06Z"
    message: Validation completed successfully
    reason: ValidationCompleted
    status: "True"
    type: Valid
  - lastHeartbeatTime: "2020-11-18T08:14:06Z"
    lastTransitionTime: "2020-11-18T08:14:06Z"
    message: All mapping rules checks passed
    reason: MappingRulesVerificationCompleted
    status: "True"
    type: MappingRulesVerified
  - lastHeartbeatTime: "2020-11-18T08:15:23Z"
    lastTransitionTime: "2020-11-18T08:14:06Z"
    message: Copying virtual machine disks
    reason: CopyingDisks
    status: "True"
    type: Processing
  dataVolumes:
  - name: v2v-cirros-for-tests-char63longssssssssssssssssssssssssssssssss-harddisk1
  targetVmName: v2v-cirros-for-tests-char63longssssssssssssssssssssssssssssssss

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  annotations:
    kubevirt.io/latest-observed-api-version: v1alpha3
    kubevirt.io/storage-observed-api-version: v1alpha3
    vmware-description: ""
  creationTimestamp: "2020-11-18T08:14:06Z"
  generation: 2
  labels:
    app: v2v-cirros-for-tests-char63longssssssssssssssssssssssssssssssss
    flavor.template.kubevirt.io/medium: "true"
    os.template.kubevirt.io/rhel7.7: "true"
    tags: ""
    vm.kubevirt.io/template: rhel7-server-medium-v0.11.3
    vm.kubevirt.io/template.namespace: openshift
    vm.kubevirt.io/template.revision: "1"
    vm.kubevirt.io/template.version: v0.12.3
    workload.template.kubevirt.io/server: "true"
  managedFields:
  - apiVersion: kubevirt.io/v1alpha3
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:kubevirt.io/latest-observed-api-version: {}
          f:kubevirt.io/storage-observed-api-version: {}
      f:status: {}
    manager: virt-controller
    operation: Update
    time: "2020-11-18T08:14:06Z"
  - apiVersion: kubevirt.io/v1alpha3
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:vmware-description: {}
        f:labels:
          .: {}
          f:app: {}
          f:flavor.template.kubevirt.io/medium: {}
          f:os.template.kubevirt.io/rhel7.7: {}
          f:tags: {}
          f:vm.kubevirt.io/template: {}
          f:vm.kubevirt.io/template.namespace: {}
          f:vm.kubevirt.io/template.revision: {}
          f:vm.kubevirt.io/template.version: {}
          f:workload.template.kubevirt.io/server: {}
        f:ownerReferences: {}
      f:spec:
        .: {}
        f:running: {}
        f:template:
          .: {}
          f:metadata:
            .: {}
            f:creationTimestamp: {}
            f:labels:
              .: {}
              f:flavor.template.kubevirt.io/medium: {}
              f:kubevirt.io/domain: {}
              f:kubevirt.io/size: {}
              f:os.template.kubevirt.io/rhel7.7: {}
              f:vm.kubevirt.io/name: {}
              f:workload.template.kubevirt.io/server: {}
          f:spec:
            .: {}
            f:domain:
              .: {}
              f:clock:
                .: {}
                f:timer: {}
                f:utc:
                  .: {}
                  f:offsetSeconds: {}
              f:cpu:
                .: {}
                f:cores: {}
                f:sockets: {}
              f:devices:
                .: {}
                f:disks: {}
                f:inputs: {}
                f:interfaces: {}
                f:networkInterfaceMultiqueue: {}
                f:rng: {}
              f:features:
                .: {}
                f:acpi: {}
              f:firmware:
                .: {}
                f:bootloader:
                  .: {}
                  f:bios: {}
                f:serial: {}
              f:machine:
                .: {}
                f:type: {}
              f:resources:
                .: {}
                f:requests:
                  .: {}
                  f:memory: {}
            f:evictionStrategy: {}
            f:networks: {}
            f:terminationGracePeriodSeconds: {}
            f:volumes: {}
    manager: vm-import-controller
    operation: Update
    time: "2020-11-18T08:14:08Z"
  name: v2v-cirros-for-tests-char63longssssssssssssssssssssssssssssssss
  namespace: amos
  ownerReferences:
  - apiVersion: v2v.kubevirt.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: VirtualMachineImport
    name: test-import
    uid: 2ae927e4-32d7-48f3-8416-74f0412d1405
  resourceVersion: "2195827"
  selfLink: /apis/kubevirt.io/v1alpha3/namespaces/amos/virtualmachines/v2v-cirros-for-tests-char63longssssssssssssssssssssssssssssssss
  uid: 025797fd-0a12-485f-96dd-9990234777d6
spec:
  running: false
  template:
    metadata:
      creationTimestamp: null
      labels:
        flavor.template.kubevirt.io/medium: "true"
        kubevirt.io/domain: v2v-cirros-for-tests-char63longssssssssssssssssssssssssssssssss
        kubevirt.io/size: medium
        os.template.kubevirt.io/rhel7.7: "true"
        vm.kubevirt.io/name: v2v-cirros-for-tests-char63longssssssssssssssssssssssssssssssss
        workload.template.kubevirt.io/server: "true"
    spec:
      domain:
        clock:
          timer: {}
          utc:
            offsetSeconds: 0
        cpu:
          cores: 1
          sockets: 1
        devices:
          disks:
          - disk:
              bus: virtio
            name: dv-v2v-cirros-for-tests-char63longssssssssssssssssssssssssss-76
          inputs:
          - bus: virtio
            name: tablet
            type: tablet
          interfaces:
          - macAddress: 02:10:F7:45:53:E5
            masquerade: {}
            model: virtio
            name: vmnetwork
          networkInterfaceMultiqueue: true
          rng: {}
        features:
          acpi: {}
        firmware:
          bootloader:
            bios: {}
          serial: 5003dcc1-133d-521f-b0d0-439fd34ca0a5
        machine:
          type: q35
        resources:
          requests:
            memory: 2Gi
      evictionStrategy: LiveMigrate
      networks:
      - name: vmnetwork
        pod: {}
      terminationGracePeriodSeconds: 180
      volumes:
      - dataVolume:
          name: v2v-cirros-for-tests-char63longssssssssssssssssssssssssssssssss-harddisk1
        name: dv-v2v-cirros-for-tests-char63longssssssssssssssssssssssssss-76
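For reference, the rejected volume name can be checked with the same DNS-1123 label validation that kube-apiserver applies. A minimal Go sketch (illustrative only, not part of the operator) using k8s.io/apimachinery:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

func main() {
	// The 63-character target VM name from this report plus the "-harddisk1"
	// suffix appended to the disk volume: 73 characters in total.
	name := "v2v-cirros-for-tests-char63longssssssssssssssssssssssssssssssss-harddisk1"

	// IsDNS1123Label is the same check the API server runs on volume names;
	// it returns the "must be no more than 63 characters" error from the log.
	for _, msg := range validation.IsDNS1123Label(name) {
		fmt.Println(msg)
	}
}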
It's expected that the import fails with a 64-character name: Kubernetes only accepts names of at most 63 characters (https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-label-names).

The other problem is that derived names may be longer than 63 characters even when the VM name itself is valid. For example, if the VM name is 58 characters and the disk name is 10 characters, the derived DataVolume name is 68 characters, which is invalid.

The idea is to use IDs to name the DataVolumes and Pods generated by VMIO. Say the VM UUID is 01234567-89ab-cdef-0123-456789abcd and the disk key is key-200; the DataVolume name is then 01234567-89ab-cdef-0123-456789abcd-key-200, which is always shorter than 63 characters. A sketch of that scheme is shown below.
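A minimal sketch of the proposed naming scheme (the helper name and layout are assumptions for illustration, not the operator's actual code):

package main

import "fmt"

// dataVolumeName is a hypothetical helper: build the DataVolume name from the
// vSphere VM UUID and disk key instead of the user-supplied VM name. A UUID is
// at most 36 characters and disk keys are short (e.g. "key-200"), so the
// result always stays under the 63-character DNS label limit.
func dataVolumeName(vmUUID, diskKey string) string {
	return fmt.Sprintf("%s-%s", vmUUID, diskKey)
}

func main() {
	fmt.Println(dataVolumeName("01234567-89ab-cdef-0123-456789abcd", "key-200"))
	// Output: 01234567-89ab-cdef-0123-456789abcd-key-200
}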
https://github.com/kubevirt/vm-import-operator/pull/448
Verified on CNV 2.6.0 (iib-37005, hco-v2.6.0-466): the PVC name is ID-based rather than VM-name-based, e.g. 420342fc-b49d-05bd-3074-ae877d44a7ab-2000.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Virtualization 2.6.0 security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:0799