I am running OCP 4.5 (nightly) on virtual baremetal. CNV was installed with deploy_marketplace.sh from hyperconverged-cluster-operator. I am using the 'local' storage class. All of my PVs are pre-created; they are all 80Gi logical volumes, and I am using volumeMode: Block.

Import fails with the following in the CDI deployment pod logs:

I0403 13:30:20.254185 1 importer.go:51] Starting importer
I0403 13:30:20.257373 1 importer.go:107] begin import process
I0403 13:30:20.286493 1 data-processor.go:253] Calculating available size
I0403 13:30:20.289468 1 data-processor.go:261] Checking out file system volume size.
I0403 13:30:20.289517 1 data-processor.go:265] Request image size not empty.
I0403 13:30:20.289534 1 data-processor.go:270] Target size -1.
I0403 13:30:20.289582 1 data-processor.go:183] New phase: Convert
I0403 13:30:20.289597 1 data-processor.go:189] Validating image
E0403 13:30:20.331551 1 data-processor.go:180] Virtual image size 10737418240 is larger than available size -1, shrink not yet supported.

The virtual image size looks correct. The available size doesn't!

I have attached (or will shortly) the following to this bug:
* The CDI deployment pod YAML
* Storage class YAML
* Example PV YAML (there are 12, all identical)
* VM YAML, including the DV definition
Created attachment 1676012 [details] cdi-deployment.yaml
Created attachment 1676013 [details] local-sc.yaml
Created attachment 1676014 [details] local-pv.yaml I picked the one which was currently bound. The others are all available.
Created attachment 1676015 [details] vm.yaml
Alexander, Has it been fixed already (upstream)?
No, this is still broken
(In reply to Alexander Wels from comment #6)
> No, this is still broken

Can you allocate some time to take a look at it, please?
Following Adam's comment 7, adding needinfo.
Could you please update the status on this bug?
I am currently looking into the issue but have not found the root cause yet.
Some updates: I just checked several different containers, and not all of them have the binary needed to check the size of the block device. When the binary is missing, the size check returns the -1 we are seeing in the log. This is the source of the problem. I am modifying the code to give a better error message when the binary is not there.
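To illustrate the failure mode above, here is a minimal Go sketch of how an importer can learn the size of a raw block device by shelling out to `blockdev --getsize64`. This is illustrative, not CDI's exact code: the function name and error handling are my own, but it shows how a missing binary (or an unreadable device) collapses into the -1 "available size" seen in the log.

```go
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// blockDeviceSize returns the size in bytes of a block device, or -1 on any
// failure. `blockdev --getsize64` prints the device size in bytes; if the
// blockdev binary is absent from the container image, exec.Command fails and
// we fall through to -1, reproducing the bogus size from the bug report.
// (Hypothetical helper for illustration only.)
func blockDeviceSize(dev string) int64 {
	out, err := exec.Command("blockdev", "--getsize64", dev).Output()
	if err != nil {
		return -1 // binary missing or device unreadable
	}
	n, err := strconv.ParseInt(strings.TrimSpace(string(out)), 10, 64)
	if err != nil {
		return -1 // unexpected output
	}
	return n
}

func main() {
	fmt.Println(blockDeviceSize("/dev/does-not-exist"))
}
```

Because -1 is a plausible-looking sentinel rather than an error, the bad value flows downstream into the "larger than available size -1" comparison instead of failing fast, which is why an explicit error for the missing binary is the better fix.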
Created a PR to report the missing binary explicitly instead of a cryptic -1.
Tested with virt-cdi-importer-container-v2.4.0-25.

When the virtual image size is smaller than the available size, the import works well:

$ oc logs importer-osp-controller-root
I0624 09:01:32.813598 1 importer.go:51] Starting importer
I0624 09:01:32.816396 1 importer.go:112] begin import process
I0624 09:01:32.856226 1 data-processor.go:277] Calculating available size
I0624 09:01:32.857855 1 data-processor.go:285] Checking out block volume size.
I0624 09:01:32.857871 1 data-processor.go:297] Request image size not empty.
I0624 09:01:32.857880 1 data-processor.go:302] Target size 45Gi.
I0624 09:01:32.857948 1 data-processor.go:206] New phase: Convert
I0624 09:01:32.857957 1 data-processor.go:212] Validating image

When the virtual image size is larger than the available size, both the virtual size and the available size are now reported correctly, and I got the expected error: "Virtual image size 10737418240 is larger than available size 524288000. A larger PVC is required." However, a stack trace is also printed, which may not be friendly enough for users. Could you please confirm whether this is the expected behavior? The detailed log is below:

$ oc logs importer-osp-controller-root
I0624 09:43:36.083904 1 importer.go:51] Starting importer
I0624 09:43:36.099090 1 importer.go:112] begin import process
I0624 09:43:36.133668 1 data-processor.go:277] Calculating available size
I0624 09:43:36.135411 1 data-processor.go:285] Checking out block volume size.
I0624 09:43:36.135427 1 data-processor.go:297] Request image size not empty.
I0624 09:43:36.135453 1 data-processor.go:302] Target size 500Mi.
I0624 09:43:36.135602 1 data-processor.go:206] New phase: Convert
I0624 09:43:36.135620 1 data-processor.go:212] Validating image
E0624 09:43:36.205437 1 data-processor.go:203] Virtual image size 10737418240 is larger than available size 524288000. A larger PVC is required.
kubevirt.io/containerized-data-importer/pkg/image.(*qemuOperations).Validate
	/go/src/kubevirt.io/containerized-data-importer/pkg/image/qemu.go:193
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).validate
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:213
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).convert
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:222
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessDataWithPause
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:190
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessData
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:142
main.main
	/go/src/kubevirt.io/containerized-data-importer/cmd/cdi-importer/importer.go:157
runtime.main
	/usr/lib/golang/src/runtime/proc.go:203
runtime.goexit
	/usr/lib/golang/src/runtime/asm_amd64.s:1357
Unable to convert source data to target format
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessDataWithPause
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:192
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessData
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:142
main.main
	/go/src/kubevirt.io/containerized-data-importer/cmd/cdi-importer/importer.go:157
runtime.main
	/usr/lib/golang/src/runtime/proc.go:203
runtime.goexit
	/usr/lib/golang/src/runtime/asm_amd64.s:1357
E0624 09:43:36.205548 1 importer.go:159] Virtual image size 10737418240 is larger than available size 524288000. A larger PVC is required.
kubevirt.io/containerized-data-importer/pkg/image.(*qemuOperations).Validate
	/go/src/kubevirt.io/containerized-data-importer/pkg/image/qemu.go:193
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).validate
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:213
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).convert
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:222
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessDataWithPause
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:190
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessData
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:142
main.main
	/go/src/kubevirt.io/containerized-data-importer/cmd/cdi-importer/importer.go:157
runtime.main
	/usr/lib/golang/src/runtime/proc.go:203
runtime.goexit
	/usr/lib/golang/src/runtime/asm_amd64.s:1357
Unable to convert source data to target format
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessDataWithPause
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:192
kubevirt.io/containerized-data-importer/pkg/importer.(*DataProcessor).ProcessData
	/go/src/kubevirt.io/containerized-data-importer/pkg/importer/data-processor.go:142
main.main
	/go/src/kubevirt.io/containerized-data-importer/cmd/cdi-importer/importer.go:157
runtime.main
	/usr/lib/golang/src/runtime/proc.go:203
runtime.goexit
	/usr/lib/golang/src/runtime/asm_amd64.s:1357
Yes, the stack trace is fine; it is useful information for debugging problems. As long as the nice-looking error message appears in the conditions and/or events on the DataVolume, we are happy.
OK, then I will move the bug to VERIFIED. Thanks for the explanation :)
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2020:3194